Title:
METHOD TO ORGANIZE DATA TRAFFIC AFFECTING A VEHICLE
Document Type and Number:
WIPO Patent Application WO/2023/093981
Kind Code:
A1
Abstract:
The invention is concerned with a method to organize data traffic affecting a vehicle (1), particularly an autonomously driving vehicle (1). The method comprises providing (S2) environmental situation data (20) representing a situation of an environment of the vehicle (1), and vehicle situation data (21) representing a situation of the vehicle (1). It further comprises determining (S3) a resolution requirement information (28) by applying a resolution requirement determination algorithm (27) to the provided environmental situation data (20) and vehicle situation data (21), wherein the determined resolution requirement information (28) represents a required resolution of data. Besides, the method comprises providing (S4) data with a resolution according to the determined resolution requirement information (28).

Inventors:
GROSSETTI GIOVANNI (DE)
BRIESE STEFAN (DE)
MFON EMMANUEL VERANYUY (DE)
UZUN SERDAL (DE)
JOSHI BHUSHAN (DE)
Application Number:
PCT/EP2021/082885
Publication Date:
June 01, 2023
Filing Date:
November 24, 2021
Assignee:
VOLKSWAGEN AG (DE)
CARIAD SE (DE)
International Classes:
G05D1/02; B60W60/00; G01S17/00
Foreign References:
US20170031363A12017-02-02
US20200255030A12020-08-13
US20210097303A12021-04-01
DE102020200875A12021-02-25
Attorney, Agent or Firm:
HOFSTETTER, SCHURACK & PARTNER PATENT- UND RECHTSANWALTSKANZLEI, PARTG MBB (DE)
Claims:

CLAIMS:

1. Method to organize data traffic affecting a vehicle (1), particularly an autonomously driving vehicle (1), wherein the method comprises the following steps:

- providing (S2) environmental situation data (20) representing a situation of an environment of the vehicle (1), and vehicle situation data (21) representing a situation of the vehicle (1);

- determining (S3) a resolution requirement information (28) by applying a resolution requirement determination algorithm (27) to the provided environmental situation data (20) and vehicle situation data (21), wherein the determined resolution requirement information (28) represents a required resolution of data; and

- providing (S4) data with a resolution according to the determined resolution requirement information (28).

2. Method according to claim 1, wherein the environmental situation data (20) comprise at least one of the following data:

- illumination data (23) representing an illumination of the environment;

- weather data (24) representing a weather condition of the environment;

- environmental type data (25) representing a type of the environment;

- infrastructure data (26) representing an infrastructure element in the environment of the vehicle (1); and/or

- ground condition data (26) representing a condition of a ground on which the vehicle (1) is located.

3. Method according to any of the preceding claims, wherein the vehicle (1) captures the environmental situation data (20) and/or the vehicle situation data (21) by a sensor device (6) and/or receives the respective data from an external device (12a, 12b), particularly a server unit (12a) and/or another vehicle (12b).

4. Method according to claim 3, wherein the vehicle (1) captures and/or receives the environmental situation data (20) and/or the vehicle situation data (21) with a highest data resolution available.

5. Method according to claim 3, wherein the vehicle (1) captures and/or receives the environmental situation data (20) and/or the vehicle situation data (21) with a lowest data resolution available.

6. Method according to any of the preceding claims, wherein the determined resolution requirement information (28) depends on a kind of data.

7. Method according to any of the preceding claims, wherein the data provided with the resolution according to the determined resolution requirement information (28) comprises further environmental situation data (20’) and/or further vehicle situation data (21’).

8. Method according to claim 7, comprising:

- determining (S5) a further resolution requirement information (28’) by applying the resolution requirement determination algorithm (27) to the further environmental situation data (20’) and the further vehicle situation data (21’);

- verifying (S6) a deviation of the determined further resolution requirement information (28’) from the determined resolution requirement information (28);

- if the determined further resolution requirement information (28’) deviates from the determined resolution requirement information (28), providing (S7) data with a resolution according to the determined further resolution requirement information (28’).

9. Method according to any of the preceding claims, comprising:

- determining (S8) obstacle data (31) by applying an obstacle determination algorithm (29) to the provided data, wherein the obstacle data (31) represent an obstacle (14a, 14b) in the environment of the vehicle (1);

- determining (S12) an obstacle dependent resolution requirement information (28”) by applying the resolution requirement determination algorithm (27) to the determined obstacle data (31), wherein the determined obstacle dependent resolution requirement information (28”) represents a required resolution of provided data due to the determined obstacle (14a, 14b);

- providing (S13) data with a resolution according to the determined obstacle dependent resolution requirement information (28”).

10. Method according to claim 9, comprising:

- determining (S10) an obstacle significance information (33) by applying an obstacle assessment algorithm (32) to the determined obstacle data (31), wherein the obstacle significance information (33) represents a significance of the detected obstacle (14a, 14b) for a current state and/or driving route (13) of the vehicle (1), and

- only if the determined obstacle significance information (33) exceeds an obstacle significance threshold (34), determining (S11) the obstacle dependent resolution requirement information (28”).

11. Method according to claim 10, comprising providing (S9) further data with an increased resolution compared to the provided data, and determining the obstacle significance information (33) by applying the obstacle assessment algorithm (32) to the determined further data.

12. Method according to any of the claims 9 or 11, wherein determining the obstacle dependent resolution requirement information (28”) comprises applying the resolution requirement determination algorithm (27) to environmental situation data (20) and vehicle situation data (21) provided with a resolution according to the determined resolution requirement information (28).

13. Method according to any of the claims 9 to 12, comprising determining (S14) an operating information (36) for a longitudinal and/or transverse guidance of the vehicle (1) by applying an operating information determination algorithm (35) to the determined obstacle data (31) and operating (S15) the vehicle (1) according to the determined operating information (36) so that the vehicle (1) passes by or stops without collision with the determined obstacle (14a, 14b).

14. Method according to any of the claims 9 to 13, comprising storing the determined obstacle data (31), particularly comprising a position information of the determined obstacle (14a, 14b), in a data storage unit (5) of the vehicle (1).

15. Method according to any of the claims 9 to 14, comprising transmitting the determined obstacle data (31) to an external device (12a, 12b), particularly a server unit (12a) and/or another vehicle (12b).

16. Method according to any of the preceding claims, wherein provision of the environmental situation data (20) and the vehicle situation data (21) requires at least one of the following events (S1):

- activation of the vehicle (1), particularly by operating a switch-on device of the vehicle (1);

- activation of an autonomous driving mode of the vehicle (1); and/or

- a current speed of the vehicle (1) exceeds a speed limit, particularly specified for a road (2) on which the vehicle (1) is driving.

17. Vehicle (1) comprising a control unit (4), wherein the vehicle (1) is configured to carry out a method according to any of the preceding claims.

18. Computer program product stored in a control unit (4) of a vehicle (1) and/or an external device (12a, 12b) and comprising instructions which, when the program is executed by the control unit (4), cause the control unit (4) to carry out the method according to any of the claims 1 to 16.

19. A control unit (4) configured to carry out respective steps of the method according to any of the claims 1 to 16.

Description:
Method to organize data traffic affecting a vehicle

DESCRIPTION:

The invention is concerned with a method to organize data traffic affecting a vehicle, particularly an autonomous driving vehicle. Besides, the invention is concerned with a vehicle configured to carry out such a method, a computer program product to execute such a method, and a data processing device configured to carry out such a method.

An autonomous driving vehicle as well as a semi-autonomous driving vehicle may be affected by a large amount of data traffic, particularly sensor data traffic. The data traffic may originate from sensor data acquired by at least one sensor of the vehicle itself and/or may be provided to the vehicle by an external device, particularly a server unit and/or another vehicle.

DE 10 2020 200 875 A1 discloses a method to provide sensor data captured by a sensor device of a vehicle. Hereby, local processing areas of surroundings of the vehicle are differentiated according to a relevance of sensor data acquired in the respective local processing area.

It is an object of the present invention to improve organization of data traffic affecting a vehicle.

The object is accomplished by the subject matter of the independent claims. Advantageous developments with convenient and non-trivial further embodiments of the invention are specified in the following description, the dependent claims and the figures.

A first aspect of the invention is concerned with a method to organize data traffic affecting a vehicle. The vehicle is particularly an autonomous driving vehicle. In other words, the vehicle is particularly configured to operate a longitudinal and transverse guidance of the vehicle, meaning a drive, brake and steering system of the vehicle, autonomously. Alternatively or additionally, the vehicle is preferably configured to drive at least semi-autonomously. The vehicle is, for example, a motor vehicle, such as a passenger car, a truck, a motorcycle and/or a bus. Data traffic is preferably all data transmitted and/or received by a control unit of the vehicle, wherein data may be transferred internally and/or between the vehicle and an external device.

The inventive method is based on the observation that an autonomous driving vehicle is typically affected by a large amount of data traffic within the vehicle and/or between the vehicle and the external device. The data may be sensor data and/or processed data and/or the data may be generated by data processing for autonomous driving. Therefore, it is essential to provide a robust data processing system for the vehicle which can deal with the large amount of data traffic. This implies an optimization of performance and a mitigation of latency to ensure a timely reaction of the vehicle.

Such an optimization of data traffic within the autonomous driving vehicle may be based on a situation-dependent optimization of an array of different sensor devices of the vehicle in order to reduce data traffic that is not required. Each of the sensor devices may have a specific and preferably individual resolution and thus different accuracy specifications. For example, the autonomous driving vehicle can capture or collect sensor data by a low resolution sensor and/or process data with a fast data processing algorithm with low accuracy during the day and in good weather conditions because the respective quality of data may be sufficient due to the advantageous light and weather conditions. Contrarily, during the night and in worse weather conditions, such as rain, the autonomous driving vehicle may require data of higher quality and thus collect sensor data with higher resolution by, for example, another sensor device of the vehicle providing higher resolution sensor data, and/or process data with a slower processing algorithm and/or an equally fast algorithm with higher accuracy. This means that data traffic may be increased or decreased depending on the current situation and particularly on a current requirement for data quality.

The inventive method is preferably a computer implemented method. The inventive method comprises providing environmental situation data and vehicle situation data. The environmental situation data represent a situation of an environment of the vehicle. The vehicle situation data represent a situation of the vehicle. Data in the sense of this invention is electronic data representing a defined kind of information and comprising, for example, one or multiple data elements. The environmental situation data comprise, for example, information on a current weather condition, such as sunny, cloudy, foggy, rainy or windy. The environmental situation data hence represent a state of a surrounding area of the vehicle that may determine an environmental condition faced by the vehicle. The vehicle situation data comprise information on a current state of the vehicle, for example, a current state of motion distinguishing between stopped, parked and/or driving. The stopped vehicle is, for example, a vehicle waiting in front of a red traffic light and/or a railway crossing. Furthermore, the vehicle situation data may represent a current driving speed of the vehicle so that the vehicle situation data are different for a slowly moving vehicle in a residential area and a faster moving vehicle driving on a highway. The environmental situation data and the vehicle situation data are provided to a control unit of the vehicle, such as an electronic control unit (ECU). The control unit is preferably a data processing device, such as a processor and/or a computer.

Moreover, the method comprises determining a resolution requirement information. The resolution requirement information represents a required resolution of data, preferably sensor data. In particular, it represents the required resolution of sensor data required for the vehicle. The resolution requirement information is particularly a value representing the required resolution, wherein the value may be between 0 for comparatively low resolution and 1 for comparatively high resolution.

The determination of the resolution requirement information is achieved by applying a resolution requirement determination algorithm to the provided environmental situation data and vehicle situation data. The resolution requirement determination algorithm comprises instructions which can be executed by a computer, for example, the control unit of the vehicle. As the resolution requirement determination algorithm is applied to the environmental situation data and the vehicle situation data, the determined resolution requirement information depends on the situation of the environment of the vehicle and the situation of the vehicle itself. If, for example, the environmental situation data indicate sunny weather and the vehicle situation data indicate that the vehicle is currently driving with low speed in a low-traffic area, data with a low resolution is sufficient for the vehicle. Contrarily, if the environmental situation data indicate rainy weather and the vehicle situation data indicate that the vehicle is driving in an urban area with a high vehicle density, data with a higher resolution are required, particularly in case of an autonomously driving vehicle. These different situations of the environment of the vehicle as well as the vehicle itself may be understood as different scenarios experienced by the vehicle.
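The scenario-dependent determination described above can be sketched as a simple rule-based algorithm. This is an illustrative sketch only, not the patented implementation; all field names, rules and weights are assumptions chosen to mirror the examples in the text (good weather and low speed yield a low requirement, rain and urban traffic a high one).

```python
# Illustrative sketch of a rule-based resolution requirement
# determination algorithm (27). All names and weights are
# assumptions for illustration, not the patented implementation.

def resolution_requirement(env_data: dict, vehicle_data: dict) -> float:
    """Map environmental situation data (20) and vehicle situation
    data (21) to a required resolution value in [0, 1]."""
    requirement = 0.0

    # Poor illumination or weather raises the requirement.
    if env_data.get("illumination") == "night":
        requirement += 0.3
    if env_data.get("weather") in ("rain", "fog", "snow"):
        requirement += 0.3

    # Dense, urban environments raise the requirement.
    if env_data.get("environment_type") == "urban":
        requirement += 0.2

    # Higher speed and autonomous operation raise the requirement.
    if vehicle_data.get("speed_kmh", 0) > 80:
        requirement += 0.1
    if vehicle_data.get("driving_mode") == "autonomous":
        requirement += 0.1

    return min(requirement, 1.0)

# Sunny, low-speed, low-traffic scenario: low resolution suffices.
low = resolution_requirement(
    {"illumination": "day", "weather": "sunny", "environment_type": "rural"},
    {"speed_kmh": 30, "driving_mode": "manual"},
)

# Rainy, urban, autonomous scenario: high resolution is required.
high = resolution_requirement(
    {"illumination": "night", "weather": "rain", "environment_type": "urban"},
    {"speed_kmh": 50, "driving_mode": "autonomous"},
)
```

In practice such a mapping could equally be realized by a lookup table or a trained model; the point is only that the output depends jointly on the environmental and the vehicle situation.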

As provided data may be processed data, resolution of data can be influenced by choosing a specific processing algorithm applied in order to process raw data, for example, for an autonomous driving function of the vehicle. Such a function is for example a lane assist that generates operating data for the steering system of the vehicle as processed data based on, for example, camera data of the front camera of the vehicle. It is possible that not only the resolution of the camera data are selectable by determining the resolution requirement information but also a resolution of the processing algorithm that determines the operating data based on the camera data as raw data may be adjustable to different resolution requirements. For example, a calculation speed, a precision of determined processed data and/or a required processing capacity of the algorithm may differ between an algorithm that provides processed data with high resolution and another algorithm that provides processed data with a comparatively lower resolution. The control unit may decide which algorithm to apply on respective data based on the determined resolution requirement information.
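The algorithm selection described in this paragraph can be sketched as follows. The threshold of 0.5 and the function names are hypothetical assumptions for illustration; the patent does not specify concrete algorithms or switching criteria.

```python
# Hypothetical sketch: the control unit selects between a fast,
# low-accuracy processing algorithm and a slower, high-accuracy
# one based on the determined resolution requirement information
# (28). Threshold and names are illustrative assumptions.

def process_fast(raw):
    """Fast, low-accuracy processing (e.g. coarse lane detection)."""
    return {"accuracy": "low", "result": raw}

def process_precise(raw):
    """Slower, high-accuracy processing with higher resource use."""
    return {"accuracy": "high", "result": raw}

def select_processing(resolution_requirement: float):
    """Return the processing algorithm matching the requirement."""
    return process_precise if resolution_requirement >= 0.5 else process_fast

camera_frame = object()  # stand-in for raw camera data
result = select_processing(0.8)(camera_frame)
```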

The required resolution is preferably a resolution demanded by the vehicle to ensure at least a basic functionality of the vehicle. The required resolution of data, preferably sensor data, may depend on a driving mode of the vehicle, meaning that an autonomous driving vehicle typically requires a higher resolution of sensor data compared to a semi-autonomously driving vehicle or a manually driven vehicle. The driving mode of the vehicle is preferably comprised by the vehicle situation data, meaning that the vehicle situation data may represent a driving mode of the vehicle as the driving mode represents the situation of the vehicle.

The method comprises providing data with the resolution according to the determined resolution requirement information. As a result of the described analysis of the situation of the environment of the vehicle as well as the situation of the vehicle itself, a situation-dependent resolution requirement information is hence determined and further considered, which may reduce data traffic affecting the vehicle to a minimum only if this is reasonable in the current situation. The provided data are preferably sensor data. The provided data is, for example, captured by a sensor device of the vehicle, for example, a sensor such as a front camera configured to capture static or moving image data of the environment of the vehicle in a front area of the vehicle. Alternatively or additionally, the provided data depend on an applied processing algorithm with which, for example, captured sensor data are processed in order to provide autonomous driving of the vehicle.

The environmental situation data and the vehicle situation data are preferably provided for the control unit of the vehicle. The control unit determines the resolution requirement information and at least transmits an operating signal representing the determined resolution requirement information to, for example, the sensor device of the vehicle. The sensor device can then provide sensor data with the resolution according to the determined resolution requirement information for the control unit by operating according to the operating signal. The provided data are as well provided for the control unit.
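The signal flow just described, in which the control unit derives an operating signal from the determined resolution requirement information and the sensor device then delivers data at the requested resolution, can be sketched as below. The class, method names, and resolution values are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the operating-signal flow between the
# control unit (4) and a sensor device (6). Names and resolution
# values are illustrative assumptions.

class SensorDevice:
    """Simplified sensor (e.g. front camera) with selectable resolution."""

    def __init__(self, available_resolutions):
        self.available_resolutions = sorted(available_resolutions)
        self.resolution = self.available_resolutions[0]

    def apply_operating_signal(self, required: float):
        """Apply the operating signal: pick the lowest available
        resolution that still satisfies the required fraction of
        the maximum resolution."""
        maximum = self.available_resolutions[-1]
        for r in self.available_resolutions:
            if r >= required * maximum:
                self.resolution = r
                break

    def capture(self):
        return {"resolution": self.resolution, "payload": "..."}

camera = SensorDevice(available_resolutions=[480, 720, 1080])
camera.apply_operating_signal(required=0.6)  # 0.6 * 1080 = 648 -> 720 lines
frame = camera.capture()
```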

The described method provides better organization of data traffic affecting the vehicle because, by adjusting the resolution of provided data according to the situation of the environment of the vehicle as well as the situation of the vehicle itself, the amount of data is always chosen according to a required data traffic amount. This is the case because high resolution data typically result in higher data traffic whereas lower resolution data result in less data traffic. Therefore, by adjusting the resolution of data, meaning by determining a current resolution requirement information, at least sensor data traffic affecting the vehicle can be easily reduced and hence be better organized. The described method provides organization of data traffic so that reliability and availability of an autonomous driving function of the vehicle are guaranteed while a probability of data overflow due to a large amount of data traffic is reduced. In summary, organization of data traffic affecting a vehicle is optimized.

An embodiment comprises that the environmental situation data comprise at least one of the following data: Illumination data, weather data, environmental type data, infrastructure data and/or ground condition data.

Illumination data represent an illumination of the environment. Illumination data may, for example, differentiate between times of daylight and night, and/or consider brightness of external light sources such as sunlight, street lighting and/or light sources of the vehicle itself, another vehicle in the environment of the vehicle, a building and/or another infrastructure element.

Weather data may represent a weather condition of the environment of the vehicle. Weather data can, for example, distinguish between sunny, cloudy, foggy, rainy and/or windy conditions. Weather data is, for example, determined by the control unit, for example, based on sensor data of a sensor device of the vehicle, such as a camera, a temperature sensor and/or a rain sensor. Alternatively or additionally, weather data are transmitted to the vehicle by the external device.

Environmental type data represent a type or kind of the environment of the vehicle. A type of environment is, for example, urban or rural. More precisely, the environmental type data may represent a city, a residential area, a countryside, a desert, a forest, a coastline, a snowcapped landscape and/or a mountainous terrain.

Infrastructure data represent an infrastructure element in the environment of the vehicle, for example, a traffic light in front of which the vehicle is currently stopped. The infrastructure element is preferably a traffic infrastructure element, such as the traffic light, a traffic sign, a parking space and/or a parking garage.

Ground condition data represent a condition of a ground on which the vehicle is located. In other words, the ground is, for example, a street, a road or a parking space on which the vehicle is currently driving or positioned. Ground condition comprises, for example, information on a roughness of a pavement on which the vehicle is located, meaning driving and/or positioned. Ground condition data may comprise a pothole density and/or details on a road surface. Ground condition data is, for example, captured by a vibration sensor of the vehicle.

The described environmental situation data all contribute to precisely represent the situation of the environment of the vehicle. Typical factors having an impact on required resolution of sensor data are covered, particularly illumination and type of environment. In combination with the vehicle situation data it is hence possible to determine precisely the resolution requirement information.

A further embodiment comprises that the vehicle captures the environmental situation data and/or the vehicle situation data by a sensor device. The sensor device of the vehicle is preferably a camera, particularly a front camera, a rear camera, a side camera, a Lidar device, an infrared sensor, a temperature sensor, a rain sensor, a vibration sensor and/or a radar device.

Alternatively or additionally, the vehicle receives the respective data from the external device, particularly the server unit and/or another vehicle. Therefore, the vehicle typically comprises a communication interface to establish a wireless communication connection to the external device. This connection can be a vehicle-to-infrastructure and/or a vehicle-to-vehicle communication connection. The wireless communication connection may be a wireless local area network (WLAN), a Bluetooth connection and/or a mobile data network. The mobile data network is preferably based on a technology standard for broadband cellular networks, such as long-term evolution (LTE), long-term evolution advanced (LTE-A), fifth generation (5G) or sixth generation (6G). The received data is, for example, data captured by a sensor device of another vehicle, which may include camera data of the vehicle captured by the other vehicle to determine the vehicle situation data of the vehicle. Alternatively or additionally, the respective data are provided by the external server unit, which may forward data received from at least one other vehicle and/or provide processed data based on data received from the at least one other vehicle. The received data may comprise information on traffic density, weather condition and/or at least one traffic jam.

Therefore, precise environmental and/or vehicle situation data are provided, resulting in precise determination of the resolution requirement information.

Besides, an embodiment comprises that the vehicle captures and/or receives the environmental situation data and/or the vehicle situation data with the highest data resolution available. This means that the data based on which the resolution requirement information is determined is received with a particularly high resolution, meaning the highest data resolution possible. Preferably for a predefined time interval, for example, several seconds after activation of the vehicle, the environmental situation data and/or the vehicle situation data are captured with at least one sensor device, wherein the captured data has the highest possible data resolution. The highest available data resolution depends on the sensor device that captures the data. Alternatively or additionally, the environmental situation data and/or the vehicle situation data are received by the vehicle, wherein particularly high resolution data are received and/or the respective data are received with a particularly high data reception rate. As a result, precise determination of the resolution requirement information is possible, wherein a large amount of data traffic may occur for the determination.

Alternatively, an embodiment comprises that the vehicle captures and/or receives the environmental situation data and/or the vehicle situation data with the lowest data resolution available. Instead of using high precision data and risking a large amount of data traffic to determine the resolution requirement information, it is hence possible to base the determination of the resolution requirement information on low resolution data. Therefore, a minimum data traffic occurs which saves processing resources of the control unit. This is based on the idea that no high resolution data are required to determine the situation of the environment of the vehicle as well as the situation of the vehicle itself but low resolution data are sufficient for this. It is, for example, possible that illumination data comprised by the environmental situation data are captured by a camera device of the vehicle. It is then possible to reduce resolution of the camera device to the lowest data resolution available to provide at least rough information on the environmental situation and/or the vehicle situation which may be detailed enough to determine the resolution requirement information. The method is therefore easily applicable because it has a starting situation with only a small amount of data traffic to save energy and computing power.
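The two alternative bootstrap embodiments above, capturing the initial situation data with the highest or with the lowest available resolution, can be sketched as follows. The function name and resolution values are illustrative assumptions.

```python
# Illustrative bootstrap sketch: choose the capture resolution for
# the initial determination of the resolution requirement
# information. Names and values are assumptions for illustration.

def bootstrap_capture(sensor_resolutions, use_lowest=True):
    """Return the resolution used to capture the environmental and
    vehicle situation data before any requirement is determined."""
    ordered = sorted(sensor_resolutions)
    return ordered[0] if use_lowest else ordered[-1]

# Lowest-resolution bootstrap: minimal initial data traffic.
low_start = bootstrap_capture([480, 720, 1080], use_lowest=True)

# Highest-resolution bootstrap: most precise first determination
# at the cost of more initial data traffic.
high_start = bootstrap_capture([480, 720, 1080], use_lowest=False)
```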

According to another embodiment, the determined resolution requirement information depends on a kind of data. It is, for example, possible to choose individually which kind of data are to be captured or received with a specific resolution. The resolution requirement may vary depending on the data, meaning that it may be data-dependent. It is, for example, possible to determine a resolution requirement information that comprises a camera resolution requirement for a camera device of the vehicle and a Lidar device resolution requirement for the Lidar device of the vehicle. In this example, the resolution requirement information may require collection of low resolution camera data by the camera and of comparatively higher resolution Lidar data by the Lidar device. Such a data-dependent determination of the resolution requirement information is, for example, particularly reasonable in case of good weather conditions, for example, at daytime in a sunny urban environment, because in this scenario, for example, distance measurements to surrounding vehicles and/or objects are important so that Lidar data are captured with high resolution, whereas, for example, the resolution of the camera can be reduced due to the good weather conditions resulting in good sight and hence no requirement for particularly high resolution data. Compared to this, in case of rain, the camera resolution requirement may be increased so that both Lidar data as well as camera data are required with high resolution, preferably with the highest resolution possible, in order to provide sufficiently good camera data in spite of the bad weather conditions. This allows for particularly detailed organization of data traffic.
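The data-kind-dependent determination above, low camera resolution but high Lidar resolution in good weather versus high resolution for both in rain, can be sketched as one requirement value per sensor device. The concrete rules and numbers are assumptions for illustration.

```python
# Illustrative sketch of a data-kind-dependent resolution
# requirement information (28): one requirement per sensor device.
# The scenario rules and values are assumptions for illustration.

def per_sensor_requirements(env_data: dict) -> dict:
    """Determine separate resolution requirements for the camera
    device and the Lidar device of the vehicle depending on the
    environmental situation."""
    if env_data.get("weather") == "sunny":
        # Good sight: camera resolution can be reduced, while Lidar
        # distance measurements remain important in dense traffic.
        return {"camera": 0.3, "lidar": 0.9}
    if env_data.get("weather") == "rain":
        # Bad sight: both camera and Lidar data at high resolution.
        return {"camera": 1.0, "lidar": 1.0}
    return {"camera": 0.5, "lidar": 0.5}

sunny = per_sensor_requirements({"weather": "sunny"})
rainy = per_sensor_requirements({"weather": "rain"})
```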

The resolution of the provided data according to the determined resolution requirement information preferably varies between the lowest and the highest resolution available. It is possible to provide multiple variations of resolution between the lowest and the highest resolution available. The resolutions in between, meaning the intermediate resolutions, can vary continuously or gradually.

According to another embodiment, data provided with the resolution according to the determined resolution requirement information comprise further environmental situation data and/or vehicle situation data. The provided data represents hence the current situation of the environment and the vehicle. In other words, data required to determine the resolution requirement information are provided even after the resolution requirement information has already been determined at least once. Preferably, the provided data is used to provide autonomous driving of the vehicle. This allows that at least in principle a further determination of the resolution requirement information is possible so that the resolution of the provided data may be adapted to a changing situation of the environment and/or the vehicle.

Preferably, the described provided data is provided continuously or in predefined time intervals of, for example, 1 second, 2 seconds, 3 seconds, 5 seconds, 10 seconds, 30 seconds, 1 minute, 2 minutes, 3 minutes, 5 minutes or 10 minutes. The time interval may be between, for example, 1 second and 10 minutes, 3 seconds and 5 minutes, 5 seconds and 3 minutes or 10 seconds and 1 minute. It is furthermore possible to provide the described data at a time interval lying between the listed values.

Besides, an embodiment comprises determining a further resolution requirement information by applying the resolution requirement determination algorithm to the further environmental situation data and the further vehicle situation data. The resolution requirement information is thus determined at least twice. However, data provided after the first determination of the resolution requirement information to determine this information for the at least second time are provided with a resolution according to the first determined resolution requirement information. This provides a continuous monitoring of the situation of the environment of the vehicle as well as the situation of the vehicle itself, for example, in order to reevaluate the current resolution requirement information. It is therefore easily possible to, for example, repeat the above-described method based on data provided with a resolution according to the determined resolution requirement information.

The method furthermore comprises verifying a deviation of the determined further resolution requirement information from the determined resolution requirement information. It is hence evaluated whether the environmental situation or the vehicle situation has changed since the determination of the first resolution requirement information. If the environmental situation has changed, for example because the vehicle has left the urban environment and is now located in a rural environment with little traffic compared to the urban environment, the determined resolution requirement information changes, for example, to a further resolution requirement information with a lower resolution requirement than before.

The method furthermore comprises providing data with a resolution according to the determined further resolution requirement information if the determined further resolution requirement information deviates from the determined resolution requirement information. The two resolution requirement information deviate from each other if at least data provided by a specific sensor device of the vehicle, and/or a kind of data captured by the vehicle or provided for the vehicle, is required with a different resolution than the earlier determined resolution requirement information specifies.

Alternatively or additionally, a minimal deviation percentage is given as a tolerance, and a deviation between the two resolution requirement information is only determined if the further resolution requirement information deviates from the previously determined resolution requirement information by a percentage that is higher than the minimal deviation percentage. Thereby, small deviations between the resolution requirement information are not taken into account and are subsequently ignored, meaning that further data provision is continued with the resolution according to the previously determined resolution requirement information. In other words, it is possible to continuously monitor the resolution requirement information and reevaluate it, so that the method can be quickly and easily adapted to the current situation at any time.
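A minimal sketch of such a tolerance check, assuming resolution requirements are expressed as one numeric value per data source (the dictionary layout and the 10 % default are illustrative assumptions, not specified by the method):

```python
def requirement_deviates(previous, current, min_deviation_pct=10.0):
    """Check whether a further resolution requirement information deviates
    from the previous one by more than the tolerated percentage for any
    sensor device or kind of data."""
    for source in set(previous) | set(current):
        old = previous.get(source, 0.0)
        new = current.get(source, 0.0)
        if old == 0.0:
            if new != 0.0:
                return True  # a newly required data source counts as a deviation
            continue
        if abs(new - old) / old * 100.0 > min_deviation_pct:
            return True
    return False
```

Deviations below the tolerance return `False`, so data provision simply continues with the previously determined resolution.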

According to another embodiment, the method comprises determining obstacle data by applying an obstacle determination algorithm to the provided sensor data. The obstacle data represent an obstacle in the environment of the vehicle. The obstacle can be, for example, a human being, meaning a person, crossing a road on which the vehicle is currently driving. Alternatively or additionally, the obstacle data may represent a plastic bag or another object located on the road on which the vehicle is driving and/or on a side of the road. In general, the obstacle is an object that is detected by the sensor devices of the vehicle and/or represented by the received data. Data provided to determine the obstacle data are preferably camera data, Lidar data and/or radar data. The obstacle can be another vehicle, for example, a vehicle at the end of a traffic jam towards which the vehicle is heading. The obstacle can alternatively or additionally be a road construction site. The obstacle determination algorithm is preferably based on methods of digital image processing and preferably comprises instructions to detect and identify the obstacle. This means that the obstacle determination algorithm comprises instructions which can be executed by, for example, the control unit of the vehicle. By executing the instructions of the obstacle determination algorithm, sensor data are preferably analyzed in such a way that an obstacle is recognized and identified within the environment of the vehicle.

The method comprises determining an obstacle dependent resolution requirement information by applying the resolution requirement determination algorithm to the determined obstacle data. The determined obstacle dependent resolution requirement information represents a required resolution of provided data due to the determined obstacle. In other words, it is possible to change the resolution of the provided data as a consequence of the detected obstacle. This is particularly relevant if, for example, the obstacle represents the person crossing the road. In this case, the person is a potential obstacle which may be carefully observed by, for example, the front camera of the vehicle in order to detect, for example, the detailed movement of the person across the road on which the vehicle is currently driving. Although so far, for example, only low resolution camera data was required according to the resolution requirement information, the obstacle dependent resolution requirement information may, due to the detected obstacle, differ from this previous resolution requirement information in order to provide higher resolution data of the obstacle. The person can then be better observed and, for example, be further analyzed. The method furthermore comprises providing data with the resolution according to the determined obstacle dependent resolution requirement information. This means that after detecting the obstacle and observing a necessary change in the required resolution of data, the method may quickly adapt to the newly determined obstacle dependent resolution requirement information in order to react situation-adequately to the observed obstacle.

Moreover, an embodiment comprises determining an obstacle significance information by applying an obstacle assessment algorithm to the determined obstacle data. The obstacle significance information represents a significance of the detected obstacle for a current state and/or driving route of the vehicle. The obstacle assessment algorithm hence comprises instructions which, when executed by, for example, the control unit of the vehicle, allow a classification of the determined obstacle. This means that the method can differentiate between obstacles which have a high, or at least potentially high, impact on the vehicle, particularly its driving route as well as its current state, and other obstacles which have little or no such impact. An obstacle with high impact could be, for example, the person crossing the road in front of the autonomously driving vehicle. Contrarily, an obstacle with low impact on the vehicle could be the plastic bag or another small object located on the road or at a side of the road. The reason for this is that driving over the plastic bag, for example, is considered not to be critical for the vehicle, so that this obstacle does not influence the current state and/or the driving route of the vehicle.

The current state is, for example, represented by vehicle situation data. The driving route is typically a predetermined route of the vehicle from a starting position and/or a current position of the vehicle to a destination of the vehicle. The driving route is typically provided by a navigation system of the vehicle. If the obstacle is, for example, a road construction site it may be reasonable or even necessary to change the current route in order to drive around the road construction site. Therefore, the obstacle may have a significant impact on the current driving route. In case of the person crossing the road, an emergency stop of the vehicle may be reasonable and even necessary, meaning that a current state of the vehicle may change from driving to stopped, meaning that such an obstacle has significant impact on the current state of the vehicle.

The method comprises determining the obstacle dependent resolution requirement information only if the determined obstacle significance information exceeds an obstacle significance threshold. This means that, for example, a low impact obstacle such as the plastic bag on the road is merely detected by the vehicle but does not result in a reevaluation of the resolution requirement information, meaning that no obstacle dependent resolution requirement information is determined due to the little significance of the plastic bag as an obstacle to the vehicle state and/or the driving route of the vehicle. On the other hand, the person crossing the road in front of the vehicle represents an obstacle of high significance for the current state and/or the driving route of the vehicle. In this situation, the obstacle dependent resolution requirement information is determined as described above. Therefore, for example, the current resolution requirement for the front camera of the vehicle is preferably immediately increased to, for example, the highest resolution available in order to provide detailed and accurate sensor data, particularly high resolution camera data, of the person crossing the road. This reduces the required calculation capacity and helps to organize data traffic affecting the vehicle in a particularly situation-dependent manner, since the obstacle dependent resolution requirement information is only determined if the detected obstacle has a sufficient significance for the current state and/or the driving route of the vehicle. Therefore, in case of obstacles with obviously little impact in the environment of the vehicle, no additional, time- and resource-consuming reevaluation of the resolution requirement information is performed.
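The threshold gating can be illustrated with a toy sketch. The significance scores, the threshold value of 0.5 and the dictionary layout are purely illustrative assumptions; a real obstacle assessment algorithm would derive the score from sensor data:

```python
OBSTACLE_SIGNIFICANCE_THRESHOLD = 0.5   # hypothetical predefined value

def obstacle_significance(obstacle):
    """Toy obstacle assessment algorithm: a person or a construction site
    scores high, small debris such as a plastic bag scores low."""
    scores = {"person": 0.95, "construction_site": 0.8, "plastic_bag": 0.05}
    return scores.get(obstacle["type"], 0.5)

def react_to_obstacle(obstacle, current_requirement):
    """Re-determine the requirement only for sufficiently significant obstacles."""
    if obstacle_significance(obstacle) <= OBSTACLE_SIGNIFICANCE_THRESHOLD:
        return current_requirement          # low-impact obstacle: no reevaluation
    adapted = dict(current_requirement)
    adapted["front_camera"] = "highest"     # e.g. raise the front camera resolution
    return adapted
```

A plastic bag leaves the requirement untouched, while a person crossing the road immediately raises the front camera requirement.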

The obstacle significance threshold is a predefined value, which is typically stored in the control unit that executes the determination of the obstacle significance information. The obstacle significance threshold provides a decision-making tool to decide whether a determined obstacle has sufficient significance for the current state and/or the driving route of the vehicle or not. Moreover, an embodiment comprises providing further data with an increased resolution compared to the resolution of data provided so far. The method further comprises determining the obstacle significance information by applying the obstacle assessment algorithm to the determined further data. This means that after determining the obstacle data the resolution of all further provided data is preferably immediately increased. This means that, for example, the front camera which has been providing data with a low resolution now changes its resolution to a higher resolution. Alternatively or additionally, a used sensor device can be deactivated and another sensor device configured to provide preferably the same kind of data can be activated and from now on provide the respective data. Such a sensor device swap may be useful to easily provide distance data with different resolution, for example, if the vehicle comprises an infrared proximity sensor with low resolution and a Lidar device with comparatively higher resolution. Alternatively or additionally, the further data are provided by the external device.

In order to always get precise information on the significance of an obstacle for the vehicle, the obstacle assessment algorithm is hence not simply applied to the data provided with a resolution according to the resolution requirement information or the further resolution requirement information, but to data with the increased resolution, preferably the highest resolution available. Therefore, the determination of the significance of the obstacle is particularly precise, since it is based on more precise data compared to data typically provided in the current situation of the environment of the vehicle as well as the vehicle itself.

A further embodiment comprises that determining the obstacle dependent resolution requirement information comprises applying the resolution requirement determination algorithm to environmental situation data and vehicle situation data provided with the resolution according to the determined resolution requirement information. This means that the resolution requirement determination algorithm is not just applied to the obstacle data but also to determined environmental situation data and vehicle situation data which are continuously provided, for example, by sensor data with the resolution according to the determined resolution requirement information. Alternatively or additionally, it is possible to apply the resolution requirement determination algorithm to environmental situation data and vehicle situation data provided by further data, meaning by data provided with a resolution that may have already been increased or decreased compared to earlier provided data. In other words, the obstacle dependent resolution requirement information is determined based on current data concerning, for example, the environment of the vehicle as well as the vehicle itself and not on, for example, the environmental situation data and vehicle situation data which were already provided at the beginning of the described method before the resolution requirement information was determined for the first time. Therefore, the obstacle dependent resolution requirement information always considers the current situation of the environment of the vehicle as well as the current situation of the vehicle and is therefore particularly precise and accurate.

Moreover, another embodiment comprises determining an operating information for a longitudinal and/or transverse guidance of the vehicle. The operating information is determined by applying an operating information determination algorithm to the determined obstacle data. Additionally or alternatively, the operating information determination algorithm is applied to the further data and/or the environmental situation data and vehicle situation data provided with the resolution according to the determined resolution requirement information.

The operating information determination algorithm comprises instructions to decide whether a route correction is needed due to the determined obstacle or not. The operating information determination algorithm can determine operating parameters for a drive and/or brake system of the vehicle, meaning its longitudinal guidance, and/or a steering system of the vehicle, meaning its transverse guidance. The method furthermore comprises operating the vehicle according to the determined operating information so that the vehicle passes by, or stops without collision with, the determined obstacle. It is, for example, possible to determine an emergency stop route according to which the vehicle stops, for example, in front of the person crossing the road in front of the vehicle if this person is the detected obstacle. Whether the operating information is determined or not preferably depends on the obstacle significance information, meaning that the operating information is preferably only determined if the obstacle significance information exceeds the obstacle significance threshold. In case of, for example, a road construction site in front of the vehicle as an obstacle, the vehicle preferably determines an alternative driving route that bypasses the road construction site as obstacle. In this case, the vehicle that is driven according to the operating information passes by the determined obstacle without collision with it.
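The two reactions named above (emergency stop versus bypass route) can be sketched as a minimal decision function; the obstacle types and the command strings are illustrative assumptions, not part of the claimed operating information determination algorithm:

```python
def determine_operating_information(obstacle):
    """Map a significant obstacle to longitudinal and transverse guidance
    commands: stop for a person, reroute around a construction site."""
    if obstacle["type"] == "person":
        # emergency stop route: brake to a standstill in the current lane
        return {"longitudinal": "emergency_brake", "transverse": "keep_lane"}
    if obstacle["type"] == "construction_site":
        # alternative driving route bypassing the obstacle
        return {"longitudinal": "keep_speed", "transverse": "bypass_route"}
    return {"longitudinal": "keep_speed", "transverse": "keep_lane"}
```

In a real system the returned commands would feed the drive/brake and steering systems responsible for longitudinal and transverse guidance.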

The obstacle data preferably comprise information on the location and size of the obstacle. Therefore, it is easily possible to determine the alternative route information for the vehicle that comprises the passing by of, or the stop without collision with, the determined obstacle. Preferably, the operating information is determined at least partially by contribution of the navigation system of the vehicle. In case the obstacle has a significant impact on the further operating of the vehicle, it is therefore possible to automatically generate the operating information to provide a quick and reliable reaction of the vehicle to the determined obstacle.

Besides, an embodiment comprises storing the determined obstacle data in a data storage unit of the vehicle. The obstacle data particularly comprise a position information of the determined obstacle. It is hence possible to store the information on the obstacle within the vehicle so that, for example, in a future situation with the same or a similar obstacle, for example at the same location, the vehicle already has respective obstacle data stored. This may result in an early and reliable obstacle detection as well as determination of obstacle significance. According to a further embodiment, the method comprises transmitting the determined obstacle data to an external device, particularly a server unit and/or another vehicle. The obstacle data then preferably also comprise a position information of the determined obstacle. It is hence possible to pass the information on the detected obstacle on to other vehicles which can then, for example, react to the obstacle early on and in advance. For example, in case of the detected road construction site as an obstacle, the other vehicle is informed about this obstacle so that it can, for example, change or adapt its driving route in advance to pass by the road construction site without collision with it. A reasonable transfer of determined obstacle data to the other vehicle directly or to the server unit is thus provided.

According to an additional embodiment, provision of the environmental situation data and the vehicle situation data requires at least one of the following events: activation of the vehicle, particularly by operating a switch-on device of the vehicle; activation of an autonomous driving mode of the vehicle; and/or a current speed of the vehicle exceeding a speed limit, particularly one specified for a road on which the vehicle is currently driving. It is hence necessary to have an initial trigger in order to start the above-described method. This means that providing environmental situation data as well as vehicle situation data only occurs if at least one of the named events has been observed. The switch-on device of the vehicle is preferably a general switch-on and/or switch-off device for the vehicle which is typically positioned in the vicinity of a steering wheel of the vehicle.

If, for example, the vehicle is driving on a road with a speed limit of 50 kilometers per hour and it is suddenly observed that the vehicle drives faster, for example at 70 kilometers per hour, the optimization of data traffic can be activated. Alternatively or additionally, a speed limit as trigger event may be preset to, for example, 10 kilometers per hour, 20 kilometers per hour or 30 kilometers per hour. In general, the described method does not simply start at a random point in time but its start is caused by at least one of the named events. Therefore, the conditions under which the described organization of data traffic affecting the vehicle occurs can be made dependent on a specific event and are therefore controllable by, for example, the user of the vehicle if, for example, the user defines a wanted trigger event which he or she specifies, for example, by a respective manual setting of vehicle settings of the vehicle. Such manual settings may be performed by manual operation of a control element of the vehicle and/or a mobile device connected to the vehicle.
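The trigger condition itself is a simple disjunction of the named events; the following sketch assumes the 50 km/h default purely for illustration:

```python
def method_triggered(vehicle_activated, autonomous_mode_active,
                     current_speed_kmh, speed_limit_kmh=50.0):
    """Initial trigger check: any one of the named events (activation of the
    vehicle, activation of the autonomous driving mode, or exceeding the
    speed limit) starts the provision of situation data."""
    return bool(vehicle_activated
                or autonomous_mode_active
                or current_speed_kmh > speed_limit_kmh)
```

Only once this returns true does the provision of environmental situation data and vehicle situation data begin.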

The control unit can be a processing unit that may comprise one or more microprocessors and/or one or more microcontrollers. Further, the control unit may comprise program code that is designed to perform the method when executed by the control unit. The program code may be stored in a data storage unit of the control unit, which can be referred to as data storage of the control unit.

Another aspect of the invention is concerned with a vehicle comprising a control unit. The vehicle is configured to carry out a method according to the described method above. More precisely, the vehicle is configured to carry out all steps of the described method which are intended to be performed by the vehicle (and not, for example, by the external device).

Alternatively, the control unit of the vehicle only receives, for example, the resolution requirement information but does not determine it. Therefore, another aspect of the invention comprises a data processing device, which is preferably a control unit of the external device, such as the server unit. In this case, the data processing device is configured to carry out the respective steps of the described method. In particular, the vehicle and/or the data processing device are configured to carry out a method according to an embodiment of the described method or a combination of embodiments of the described method. Another aspect of the invention is concerned with a computer program product. The computer program product may comprise program code. The computer program product is stored in the control unit of the vehicle and/or the external computer, such as the data processing device of the server unit. Preferably, it is stored in a data storage unit of the control unit. The control unit can execute a method as described above. In other words, the computer program product comprises instructions which, when the program is executed by the control unit or the data processing device, cause the control unit or data processing device to carry out the respective steps of the described method. These steps are steps comprising, for example, the determination of specific information. This means that, for example, steps simply providing sensor data or other data which are captured by a sensor and/or received from another vehicle are always steps of the method that are carried out by the vehicle or the external device. However, all purely computational steps which may be carried out by a computer or executed by a computer are preferably performed by the control unit of the vehicle and/or the data processing device of the external device.
In particular, the computer program product is configured to carry out a method according to an embodiment of the described method or a combination of embodiments of the described method.

The invention also comprises embodiments that provide features which afford additional technical advantages.

The invention also comprises the combinations of the features of the different embodiments.

In the following an exemplary implementation of the invention is described. The figures show:

Fig. 1 a schematic drawing of a vehicle driving on a road;

Fig. 2 a schematic representation of a method to organize data traffic affecting a vehicle;

Fig. 3 a schematic representation of a method to verify a current resolution requirement information;

Fig. 4 a schematic representation of a method to determine an obstacle in the environment of the vehicle; and

Fig. 5 a schematic representation of a method comprising consequences of a determined obstacle.

The embodiment explained in the following is a preferred embodiment of the invention. However, in the embodiment, the described components of the embodiment each represent individual features of the invention which are to be considered independently of each other and which each develop the invention also independently of each other and thereby are also to be regarded as a component of the invention in individual manner or in another than the shown combination. Furthermore, the described embodiment can also be supplemented by further features of the invention already described.

In the figures identical reference signs indicate elements that provide the same function.

Fig. 1 shows a vehicle 1 which is currently driving on a road 2. The road 2 is located within a city 3, meaning that the vehicle 1 is driving in an urban area. The ride of the vehicle 1 takes place at daytime and in sunny weather. The vehicle 1 comprises a control unit 4. The control unit 4 comprises a processor, particularly an electronic control unit (ECU). Alternatively, the control unit is referred to as a data processing device. The control unit 4 is in other words a computer which is configured to execute a computer implemented method. Besides, the vehicle 1 comprises a data storage unit 5. The data storage unit 5 is comprised by the control unit 4 or configured as an individual component of the vehicle 1. The vehicle 1 further comprises at least one sensor device 6. In this example, the multiple sensor devices 6 are a camera 7, a Lidar device 8 (alternatively referred to as Lidar), an infrared proximity sensor 9 and a temperature sensor 10. Both the Lidar device 8 as well as the infrared proximity sensor 9 are configured to measure a distance between the vehicle 1 and an object, for example, one of the surrounding buildings of the city 3. The vehicle further comprises a communication interface 11.

Fig. 1 shows two possible external devices 12a, 12b which are an external server unit 12a and multiple other vehicles 12b. The other vehicles 12b are driving on another lane of the road 2 compared to vehicle 1. The external devices 12a, 12b are connectable to the vehicle 1 via communication connections which are sketched as dashed lines in Fig. 1. These communication connections are wireless and connect the communication interface 11 of the vehicle 1 with the communication interface 11 of the server unit 12a and/or the communication interface 11 of one of the other vehicles 12b. The server unit 12a also comprises a control unit 4.

In driving direction in front of the vehicle 1, two obstacles 14a, 14b are located. The first obstacle 14a is a plastic bag lying on the lane on which the vehicle 1 is currently driving. The other obstacle 14b is a person that is crossing the road 2. Both obstacles 14a, 14b are on a current route 13 of the vehicle 1. In case the vehicle 1 intends to perform an emergency stop in front of the person as obstacle 14b, a stop area 15 of the route 13 is sketched in Fig. 1.

Fig. 2 shows basic steps of a method, particularly a completely implemented method, to organize data traffic affecting the vehicle 1. The vehicle 1 is particularly an autonomously driving vehicle that is currently driving on the road 2 at daytime and in sunny weather conditions.

In a step S1, one of the following events takes place: activation of the vehicle 1, particularly by operating a switch-on device of the vehicle 1; activation of an autonomous driving mode of the vehicle 1; and/or a current speed of the vehicle 1 exceeds a speed limit, particularly one specified for the road 2 on which the vehicle 1 is currently driving. Alternatively, the set speed limit can be set manually or be preset for the vehicle. The set speed limit is, for example, 10 kilometers per hour, 20 kilometers per hour, 30 kilometers per hour, 50 kilometers per hour, 70 kilometers per hour, 100 kilometers per hour and/or 120 kilometers per hour. Alternatively, the set speed limit can be any speed limit between the named speed limits.

After this initial trigger due to one of the named events in step S1, a step S2 is performed. Step S2 comprises providing environmental situation data 20 and vehicle situation data 21. The environmental situation data 20 represent a situation of an environment of the vehicle 1. The environment is a surrounding area of the vehicle 1. The boundaries of the environment are typically determined by the sensor devices 6 of the vehicle 1, meaning that the environment of the vehicle 1 typically reaches at least as far as a coverage area of, for example, the camera 7 and/or the Lidar device 8 as sensor devices 6. This means that the environmental situation data 20 represent a current state of the surroundings of the vehicle 1.

In particular, the environmental situation data 20 comprise illumination data 22, weather data 23, environmental type data 24, infrastructure data 25 and/or ground condition data 26. Illumination data 22 represent an illumination of the environment. In this example, the illumination data 22 describe that it is currently daytime and therefore the vehicle 1 drives in an illuminated environment. Illumination data 22 can alternatively represent nighttime, meaning that there is only artificial illumination of the environment, for example, due to a streetlight and/or lighting of the other vehicle 12b and/or the surrounding infrastructure of the city 3. The weather data 23 represent a weather condition of the environment. In the example of Fig. 1, the weather data 23 represent the sunny weather, meaning that there is, for example, currently no rain and it is currently preferably not cloudy. The environmental type data 24 represent a type of the environment. In this example the environment is an urban environment due to the location of the road 2 within the city 3. Alternatively, the environmental type data 24 can represent a rural area, a village, a desert, a forest, a coastline and/or a snow-capped winter landscape. The infrastructure data 25 represent an infrastructure element in the environment of the vehicle 1. In this example the infrastructure data 25 represent the multiple buildings of the city 3 located next to the road 2. The ground condition data 26 represent a condition of a ground on which the vehicle 1 is located. This means the ground condition data 26 represent the condition of, for example, the road 2 on which the vehicle 1 is currently driving. In case of a parked or stopped vehicle 1, the ground condition data 26 represent a condition of the ground on which the vehicle 1 is positioned. In case of the road 2 within the city 3, the ground condition data 26 represent a smooth road surface of the road 2.
Alternatively, ground condition data 26 can comprise a density of potholes or a roughness of the road surface, for example, in case of a gravel road or cobblestone pavement.
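The kinds of environmental situation data 20 listed above can be mirrored in a small container type; the field names and example values are illustrative, not part of the described data format:

```python
from dataclasses import dataclass

@dataclass
class EnvironmentalSituationData:
    """Container mirroring the kinds of environmental situation data 20
    described above (illumination data 22, weather data 23, environmental
    type data 24, infrastructure data 25, ground condition data 26)."""
    illumination: str        # e.g. "daytime" or "nighttime"
    weather: str             # e.g. "sunny" or "rain"
    environment_type: str    # e.g. "urban", "rural", "desert"
    infrastructure: list     # e.g. ["buildings"]
    ground_condition: str    # e.g. "smooth", "gravel", "cobblestone"
```

An instance for the Fig. 1 scenario would carry daytime illumination, sunny weather and an urban environment type.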

The vehicle situation data 21 represent the situation and hence preferably a current state of the vehicle 1. The vehicle situation data 21 distinguish between a driving vehicle 1, a parked vehicle 1 and/or a stopped vehicle 1. The vehicle 1 is, for example, stopped due to a traffic light or one of the obstacles 14a, 14b along its route 13.

The environmental situation data 20 and the vehicle situation data 21 may be provided by the vehicle 1 itself and/or received by the vehicle 1. This means that the vehicle 1 either captures the environmental situation data 20 and/or the vehicle situation data 21 by at least one of the sensor devices 6, or alternatively or additionally receives the respective data, meaning the environmental situation data 20 and/or the vehicle situation data 21, from one of the external devices 12a, 12b. It is possible that the vehicle 1 captures and/or receives the environmental situation data 20 and/or the vehicle situation data 21 with the highest data resolution available. Alternatively, it can receive or capture the respective data with the lowest data resolution available. Whether it is the highest or the lowest data resolution available depends on, preferably preset, settings within the vehicle 1.

A further step S3 comprises applying a resolution requirement determination algorithm 27 to the provided environmental situation data 20 and vehicle situation data 21. This is done to determine a resolution requirement information 28 which represents a required resolution of data. Preferably, the determined resolution requirement information 28 depends on a kind of data. This means that, for example, camera data can be required with low resolution while, for example, Lidar data collected by the Lidar device 8 are required with a comparatively higher resolution. In this example of the vehicle 1 driving through the city 3 at daytime and in sunny weather, a high resolution of, for example, sensor data 29 can be required due to the large amount of traffic within the city 3. If, however, the vehicle 1 drives in a rural area with little traffic under the same illumination and weather conditions, the requirements for the resolution of sensor data 29 may be reduced. However, if the weather condition and the illumination condition according to the weather data 23 and the illumination data 22, respectively, worsen, meaning that the vehicle 1 is, for example, driving at nighttime and in rainy weather, the resolution requirements may increase due to worse or more difficult conditions for the respective sensor device 6. In other words, during step S3 the current situation of the environment of the vehicle 1 as well as the situation of the vehicle 1 itself is examined in order to determine what kind and especially what quality of resolution is currently at least required in order to, for example, continue providing autonomous driving of the vehicle 1.
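The example rules of step S3 can be sketched as a toy rule-based variant of the resolution requirement determination algorithm 27. The dictionary keys and the two-level "low"/"high" scale are illustrative assumptions; a real algorithm could use many more inputs and resolution levels:

```python
def determine_resolution_requirement(env, veh):
    """Toy rule-based sketch of the resolution requirement determination.

    env: dict with 'environment_type', 'illumination' and 'weather'
    veh: dict with 'state' ("driving", "parked" or "stopped")
    Returns a required resolution per kind of data.
    """
    if veh["state"] in ("parked", "stopped"):
        return {"camera": "low", "lidar": "low"}
    difficult = env["illumination"] == "nighttime" or env["weather"] == "rain"
    if env["environment_type"] == "urban" or difficult:
        # dense traffic or difficult sensing conditions: high resolution
        return {"camera": "high", "lidar": "high"}
    # rural area with little traffic and good conditions: lower camera resolution
    return {"camera": "low", "lidar": "high"}
```

Urban daytime driving yields high requirements, while the same conditions in a rural area reduce the camera requirement, matching the example above.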

Preferably, the required resolution of data, particularly sensor data 29, according to the determined resolution requirement information 28 varies between the lowest and the highest resolution available. It is possible that each individual sensor device 6 can switch its resolution. This is, for example, the case for the camera 7, which can switch between higher and lower resolution modes, particularly between multiple modes between the highest resolution mode available and the lowest resolution mode available. However, it is alternatively or additionally possible that at least one sensor device 6 only has one specific resolution mode. In this case, for example, the Lidar device 8 is configured for high resolution distance data whereas the infrared proximity sensor 9 is only configured to provide comparatively low resolution distance data. By changing a resolution requirement for distance data from high resolution to low resolution, it is therefore possible to switch off the Lidar device 8 and measure the distance to surrounding objects with the infrared proximity sensor 9 which has a lower resolution than the Lidar device 8.
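The described switch between the Lidar device 8 and the infrared proximity sensor 9 can be sketched as a simple selection function. This is a hypothetical sketch; the return format is an assumption made for illustration.

```python
def select_distance_sensor(required_resolution):
    """Illustrative sketch: choose between the Lidar device 8 (high
    resolution distance data) and the infrared proximity sensor 9
    (comparatively low resolution distance data)."""
    # High resolution distance data required: switch on the Lidar device 8.
    if required_resolution == "high":
        return {"lidar": True, "infrared_proximity": False}
    # Low resolution suffices: switch off the Lidar device 8 and measure
    # distances with the infrared proximity sensor 9, reducing data traffic.
    return {"lidar": False, "infrared_proximity": True}
```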

It is furthermore possible to change the source of specific required sensor data, meaning that, for example, high resolution temperature data are collectable with the temperature sensor 10 of the vehicle 1 itself, whereas low resolution temperature data may be achieved by reducing a measurement frequency and/or by receiving, preferably sporadically, temperature data from the external device 12a, 12b.

A step S4 comprises in this example providing sensor data 29 with the resolution according to the determined resolution requirement information 28. This means that either the sensor device 6 of vehicle 1 captures the sensor data 29 and/or the external device 12a, 12b provides the respective sensor data 29 and transmits it to vehicle 1 via the communication connection so that vehicle 1 receives the provided sensor data 29.

A requirement for resolution is in other words a demand for a certain resolution. A situation can alternatively be referred to as a current state, for example, of the environment of the vehicle 1 or the vehicle 1 itself.

In the above-mentioned example, it is now determined by the resolution requirement information 28 that, for example, camera data captured by the camera 7 is required with the highest resolution possible while, for example, temperature data from the temperature sensor 10 is only required with the lowest resolution available. Additionally, distance data can be provided with a medium resolution, for example by the infrared proximity sensor 9 and/or the Lidar device 8. By providing sensor data 29 with the resolution according to the determined resolution requirement information 28, it is hence possible to drastically decrease data traffic affecting the vehicle 1, providing a way to organize and optimize data traffic within the vehicle 1 or between the vehicle 1 and the external device 12a, 12b.

The object is to reduce unnecessary data traffic, which can be achieved by reducing the resolution of individual provided sensor data 29 according to the respective resolution requirement information 28. If, however, for example due to the autonomous driving of the vehicle 1, a high resolution of specific or all sensor data 29 is reasonable or necessary, the resolution of the provided sensor data 29 can be increased according to the resolution requirement information 28 and can, for example, reach the highest resolution available. A high resolution of particular sensor data 29 can alternatively be achieved by requesting additional external data, for example, provided by the other vehicles 12b.

Alternatively or additionally to the provided sensor data 29, a resolution of data processing performed by the control unit 4 may be increased or decreased by choosing between different processing algorithms, for example, between a fast and a comparatively slower algorithm and/or between algorithms that differ in required processing power. In general, step S4 comprises providing data with a resolution according to the determined resolution requirement information 28. In the following, for simplicity, reference is only made to provided sensor data 29 instead of provided data in general.
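The choice between a fast, coarse processing algorithm and a slower, more precise one can be sketched as follows. Both algorithms and the selection criterion are illustrative assumptions, shown here only to make the idea of a "processing resolution" concrete.

```python
def fast_mean(samples, stride=4):
    """Coarse but cheap: only every stride-th sample is processed."""
    subset = samples[::stride]
    return sum(subset) / len(subset)

def precise_mean(samples):
    """Slower but exact: every sample is processed."""
    return sum(samples) / len(samples)

def select_processing_algorithm(required_resolution):
    """Illustrative sketch: pick the processing algorithm matching the
    required processing resolution."""
    return precise_mean if required_resolution == "high" else fast_mean
```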

Fig. 3 shows further steps of the described method. A step S5 comprises applying the resolution requirement determination algorithm 27 to further environmental situation data 20' and further vehicle situation data 21', whereby a further resolution requirement information 28' is determined. This is based on the fact that sensor data 29 provided with a resolution according to the determined resolution requirement information 28 typically comprises further environmental situation data 20' and/or further vehicle situation data 21'. This means that although the resolution requirement information 28 has already been determined in step S3, the vehicle 1 and/or the external device 12a, 12b continue to provide both environmental situation data 20 and vehicle situation data 21, which are here referred to as further environmental situation data 20' and further vehicle situation data 21'.

In a step S6, it is checked whether the determined further resolution requirement information 28' deviates from the determined resolution requirement information 28. If the further resolution requirement information 28' deviates from the determined resolution requirement information 28, a step S7 comprises providing sensor data 29 with the resolution according to the determined further resolution requirement information 28'. In other words, the requirement for resolution is then changed from the previously determined resolution requirement information 28 to the further resolution requirement information 28'. It is hence possible that, once the vehicle 1 leaves the urban area of the city 3 and moves towards a rural area with, for example, less traffic but unchanged weather and illumination conditions, the resolution requirement for the camera 7 can be decreased to a lower resolution, so that data traffic within the vehicle 1 is reduced due to less need for high resolution camera data in the changed environmental situation. Alternatively, once a change in weather or illumination condition is observed, for example due to sudden rainfall, the resolution of, for example, the proximity measurement can be increased, so that, for example, the Lidar device 8 is switched on and the infrared proximity sensor 9 with a comparatively lower resolution is switched off, while preferably the camera resolution is increased as well.

If, however, the further resolution requirement information 28' and the resolution requirement information 28 do not deviate from each other, or only deviate within a predefined and preferably relatively small deviation range, step S4 is performed. This means that further sensor data 29' are provided with a resolution according to the determined resolution requirement information 28 and not with the resolution according to the further resolution requirement information 28'. In other words, if no change in the environmental and vehicle situation is observed, there is also no change in the resolution requirement.
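The deviation check of steps S5 to S7 can be sketched as follows, assuming, purely for illustration, that resolution requirements are encoded as numeric levels per sensor (0 = lowest):

```python
def update_requirement(current, further, deviation_range=0):
    """Illustrative sketch of steps S5-S7: keep the current resolution
    requirement information 28 unless the further requirement 28' deviates
    beyond the predefined deviation range."""
    deviation = max(abs(further[k] - current[k]) for k in current)
    # Deviation within the predefined range: keep requirement 28 (step S4).
    if deviation <= deviation_range:
        return current
    # Otherwise switch to the further requirement 28' (step S7).
    return further
```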

Fig. 4 shows further possible steps of the method. A step S8 comprises determining obstacle data 31. This is done by applying an obstacle determination algorithm 30 to the provided sensor data 29. The obstacle data 31 represent an obstacle 14a, 14b in the environment of the vehicle 1. In this example, there are two possible obstacles, meaning the plastic bag as obstacle 14a and the person as obstacle 14b. The obstacle data 31 is typically provided by one of the sensor devices 6 of the vehicle 1, for example, the camera 7. Obstacle data 31 can hence be moving or static picture data provided by the camera 7. Obstacle data 31 can be further specified with the help of distance data provided, for example, by the Lidar device 8 and/or the infrared proximity sensor 9.

A step S9 comprises providing further sensor data 29'. Further sensor data 29' are sensor data 29 with an increased resolution compared to the previously provided sensor data 29. After determining that an obstacle 14a, 14b is, for example, located along the route 13 of the vehicle 1, the resolution of the sensor devices 6 or of the received sensor data 29 is automatically increased, preferably up to the highest resolution possible. This is done in order to, for example, more precisely analyze the detected obstacle 14a, 14b.
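The automatic resolution increase of step S9 can be sketched as follows, again with numeric resolution levels as an illustrative assumption:

```python
def boost_resolution_on_obstacle(detected_obstacles, levels, max_level=3):
    """Illustrative sketch of step S9: once an obstacle 14a, 14b is
    detected, raise every sensor's resolution, preferably up to the
    highest level available (max_level is an assumed encoding)."""
    if detected_obstacles:
        return {sensor: max_level for sensor in levels}
    # No obstacle: keep the current resolution requirement unchanged.
    return dict(levels)
```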

A step S10 hence comprises determining an obstacle significance information 33. This is done by applying an obstacle assessment algorithm 32 to the determined obstacle data 31 and/or the further sensor data 29'. The obstacle significance information 33 represents a significance of the detected obstacle 14a, 14b for a current state of the vehicle 1 and/or the driving route 13 of the vehicle 1. In the case of the plastic bag as obstacle 14a, there is very little significance of this obstacle 14a, so that the obstacle significance information 33 is low for the plastic bag. Contrarily, in the case of the person as obstacle 14b, who is crossing the road 2 along the route 13 of the vehicle 1, this obstacle 14b is classified as very significant and is hence connected to a high obstacle significance information 33, especially compared to the obstacle significance information 33 of the plastic bag as obstacle 14a.

A step S11 comprises determining whether the determined obstacle significance information 33 exceeds an obstacle significance threshold 34. Only if this is the case does a step S12 follow. If, however, the determined obstacle significance information 33 does not exceed the obstacle significance threshold 34, it is possible to, for example, terminate the described method or, alternatively, change the resolution back to the resolution according to the previously determined resolution requirement information 28 and continue providing sensor data 29 according to step S4. In this case, the detected obstacle 14a, 14b has no further impact on the organization of data traffic affecting the vehicle 1. This is, for example, the case if the plastic bag is observed as obstacle 14a. However, in the case of the person as obstacle 14b, the method continues with step S12 due to the high significance of the person as obstacle 14b for both the current state as well as the driving route 13 of the vehicle 1. The current state may be represented by the vehicle situation data 21.
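The threshold check of step S11 can be sketched as follows. The concrete significance scores for the plastic bag and the person are invented for illustration; only the comparison against the threshold 34 reflects the described step.

```python
def obstacle_requires_reaction(significance, threshold=0.5):
    """Illustrative sketch of step S11: only a significance information 33
    above the obstacle significance threshold 34 lets the obstacle
    influence the data traffic organization (step S12)."""
    return significance > threshold

# Invented example scores: the plastic bag versus the crossing person.
SIGNIFICANCE = {"plastic_bag": 0.1, "person": 0.9}
```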

Step S12 comprises applying the resolution requirement determination algorithm 27 to the determined obstacle data 31. Alternatively or additionally, it can be applied to the environmental situation data 20 and the vehicle situation data 21 provided with the resolution according to the determined resolution requirement information 28 or, for example, according to the further resolution requirement information 28'. Alternatively, the environmental situation data 20 as well as the vehicle situation data 21 can be provided with the higher resolution to which the resolution of sensor data 29 was increased in step S9. By applying the resolution requirement determination algorithm 27 in step S12, an obstacle dependent resolution requirement information 28" is determined. The determined obstacle dependent resolution requirement information 28" represents a required resolution of provided sensor data 29 due to the determined obstacle 14a, 14b. A step S13 comprises providing sensor data 29 with the resolution according to the determined obstacle dependent resolution requirement information 28". In other words, due to the obstacle 14a, 14b the required resolution can change, meaning that, while approaching the person as obstacle 14b by driving along route 13, the vehicle 1 captures sensor data 29 with, if possible, even further increased resolution according to the determined obstacle dependent resolution requirement information 28". This enhancement in resolution can, for example, mean that all or at least a part of the sensor data 29 are captured by the sensor device 6 and/or received from the external device 12a, 12b with the highest resolution available. The detected obstacle 14b is therefore now further monitored in order to, for example, activate an emergency stop routine.

Such a possible emergency stop routine is sketched in Fig. 5. A step S14 comprises applying an operating information determination algorithm 35 to the determined obstacle data 31 as well as, for example, further sensor data 29' and/or sensor data 29 in general. By applying the operating information determination algorithm 35 in this way, an operating information 36 is determined. The operating information 36 represents a longitudinal and/or transverse guidance of the vehicle 1. In other words, the operating information 36 can comprise, for example, a brake signal for a braking system of the vehicle 1 to perform an emergency stop of the vehicle 1. This emergency stop preferably takes place in the stop area 15 of route 13 right in front of the person as obstacle 14b. The transverse guidance can include operating a steering system of the vehicle and can, for example, result in providing an alternative route 13 for the vehicle 1. It is hence possible in a step S15 that the vehicle 1 is operated according to the determined operating information 36 so that the vehicle 1 passes by or stops without collision with the determined obstacle 14b. In this example, the vehicle is stopped in the stop area 15. In case of a traffic accident or a larger object than the plastic bag as obstacle 14a, it may be useful to determine an alternative driving route that leads, for example, around the determined obstacle 14a, 14b, meaning that the vehicle passes by the obstacle 14a, 14b without collision. In general, if the vehicle situation data 21 represent that the vehicle 1 is currently parked or stopped, the requirements for resolution are typically lower compared to a driving state of the vehicle 1. Worse weather or illumination conditions represented by the respective illumination data 22 or weather data 23 likewise cause a higher requirement in resolution due to the worse conditions for the respective sensor device 6.
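The derivation of the operating information 36 in step S14 can be sketched as follows. The obstacle attributes ("on_route", "passable") and the decision rules are illustrative assumptions; the real operating information determination algorithm 35 is not disclosed in this form.

```python
def determine_operating_information(obstacle):
    """Illustrative sketch of step S14: derive a longitudinal and/or
    transverse guidance (operating information 36) from obstacle data 31.
    Keys and rules are invented for the example."""
    if obstacle["on_route"] and not obstacle["passable"]:
        # Longitudinal guidance: brake signal for an emergency stop,
        # preferably in the stop area 15 in front of the obstacle.
        return {"brake": True, "reroute": False}
    if obstacle["on_route"] and obstacle["passable"]:
        # Transverse guidance: steer onto an alternative route 13
        # that passes by the obstacle without collision.
        return {"brake": False, "reroute": True}
    # Obstacle not on the route: no intervention required.
    return {"brake": False, "reroute": False}
```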
Infrastructure data 25 can, for example, support evaluating the current situation of the vehicle 1. For example, if an infrastructure element such as a traffic light is situated in front of the vehicle 1, the currently stopped situation of the vehicle 1 can be interpreted accordingly and, for example, result in no change of resolution due to the merely temporary stop. The ground condition data 26 are typically provided in order to, for example, change the resolution of the vibration sensor device. This resolution can, for example, be reduced on a bumpy road 2 with a high density of potholes, because high resolution vibration sensor data contain little information on such a road 2: most of the detected vibration will most likely be due to the rough ground condition of the road 2 and have no or only little impact on autonomous driving of the vehicle 1.

It is possible that the determined obstacle data 31 is stored in the data storage unit 5 of the vehicle 1 and/or is transmitted to the external device 12a, 12b in order to be stored there. It is hence possible to save, for example, a position information of the determined obstacle 14a, 14b, which is comprised by the obstacle data 31, in order to provide information on the obstacle 14a, 14b for future driving events along route 13. This is, for example, helpful if the obstacle 14a, 14b is permanent or at least long-term, meaning that, for example, a road construction site is detected as obstacle 14a, 14b which may remain for a certain time, so that other vehicles 12b can be warned about this obstacle 14a, 14b or the vehicle 1 remembers the detected road construction site during a future drive along route 13.

Preferably, the resolution requirement information 28 is calculated independently in each possible situation. This means that the resolution requirement determination algorithm 27 is typically based on methods of artificial intelligence. For example, it is performed by an artificial neural network. It is hence possible to learn from, for example, historical data of past situations how to connect a specific environment situation and/or vehicle situation to the resolution requirement in order to provide reasonable resolution requirement information 28.
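As a hedged stand-in for the artificial neural network mentioned above, learning from historical data can be illustrated with a minimal nearest-neighbour lookup over stored (situation, requirement) pairs. The numeric feature encoding of a situation is an assumption made purely for this sketch.

```python
def requirement_from_history(history, situation):
    """Illustrative stand-in for a learned mapping: return the resolution
    requirement of the historically most similar situation. 'history' is a
    list of (situation_features, requirement) pairs with numeric features."""
    def squared_distance(a, b):
        # Simple Euclidean distance over the assumed feature encoding.
        return sum((a[key] - b[key]) ** 2 for key in a)
    best = min(history, key=lambda pair: squared_distance(pair[0], situation))
    return best[1]
```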

In general, the steps described are preferably performed by the control unit 4 as long as they do not involve capturing of sensor data 29 by sensor devices 6 and/or providing sensor data 29 from the external device 12a, 12b.

Overall, the example shows how to provide a framework in which an autonomously driving vehicle 1 selects a suitable array of sensor devices 6 and suitable algorithms to perform a constant analysis of the conditions of the environment in which the vehicle 1 is found, in order to mitigate the issue of overloading the communication system of the vehicle 1, meaning the control unit 4, with a large amount of data traffic. Step S2 can alternatively be referred to as monitoring the environment of the vehicle 1. This implies a number of activities which run in parallel. Step S3 can alternatively be referred to as classifying the scenario. Depending on the results of applying the resolution requirement determination algorithm 27, the parameters associated with monitoring the environment, meaning different requirements for sensor data 29, are determined. In general, suitable configurations of sensor resolutions and sensor devices 6 are chosen to maximize accuracy and precision of measurements. If a change of route 13 is necessary, for example due to the obstacle 14a, 14b, information on a possible route correction is provided, as already described for step S15.

In step S1 the monitoring system is turned on. The vehicle 1 connects with a backend system and collects initial data for starting the analysis. For example, it receives environmental situation data 20 from the external device 12a, 12b. Afterwards, different environmental parameters are considered with an initial value, meaning with the respective first value measured. Such parameters can represent the environment (for example a city or countryside, vehicles 12b around the vehicle 1), the weather (for example rain, wind, temperature of air and/or asphalt) and/or the infrastructure (for example the availability of a network in a smart environment such as smart cities).

The moment the vehicle 1 activates the monitoring system, the scenario in which the vehicle 1 is found is classified based on the previously mentioned parameters using a suitable set of sensor devices 6 and algorithms. This means that the resolution requirement information 28 is determined. In principle, the vehicle 1 selects the sensor devices 6 and algorithms based on the statement "maximize resolution and minimize data traffic on the system". The system can here be understood as the control unit 4 with the sensor devices 6 and/or the external devices 12a, 12b. That is, once the scenario has been classified, the vehicle 1 is able to select the devices that allow a safe and secure analysis while minimizing the data traffic on the system itself. The vehicle 1 then starts monitoring any deviation from the initial scenario, meaning that the further resolution requirement information 28' is constantly determined. If the vehicle 1 detects a deviation, for example due to a change in the situation of the environment or the situation of the vehicle 1, or due to a detected obstacle 14a, 14b on its route 13, it starts a deeper analysis using, for instance, higher resolution sensors. This will increase the data traffic but will allow analyzing the obstacle 14a, 14b more accurately and provide an input allowing the vehicle 1 to decide on a proper reaction. For instance, the vehicle 1 must be able to discriminate whether the obstacle 14a, 14b is the small plastic bag or the person. The vehicle 1 continues to analyze the situation and refines the analysis until a proper reaction is found and the obstacle 14a, 14b is avoided, as described by steps S14 and S15. Once the emergency has passed, the vehicle 1 can return to using lower resolution sensors, avoiding stress on the internal communication system, meaning for example the data traffic within the vehicle 1.

The system of interest consists of a monitoring system, composed of a number of sensor devices 6, and a software system located on the control unit 4. The sensor devices 6 are classified as high resolution sensors, providing high precision and high accuracy but causing higher data traffic, and/or low resolution sensors, providing coarser results but a lower output in terms of data traffic. The usage of specific sensor devices 6 forming a sensor array is selected by a set of algorithms. Specific parameters are used to trigger the selection of specific sensor devices 6 with high or low resolution. Ad-hoc algorithms are used to perform both the selection of the sensor device 6 to be used and the subsequent analysis of the provided data, particularly the provided sensor data 29.
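The "maximize resolution, minimize data traffic" trade-off for selecting the sensor array can be sketched as follows. The sensor records (names, resolution levels, traffic costs) are illustrative assumptions.

```python
def select_sensor_array(sensors, required):
    """Illustrative sketch: per kind of data, pick the sensor with the
    lowest data traffic that still meets the required resolution level.
    Sensor records are invented for the example."""
    selection = {}
    for kind, needed in required.items():
        candidates = [s for s in sensors
                      if s["kind"] == kind and s["resolution"] >= needed]
        # Among all sufficiently precise sensors, minimize data traffic.
        selection[kind] = min(candidates, key=lambda s: s["traffic"])["name"]
    return selection
```

With such a selection, a low distance resolution requirement would pick the infrared proximity sensor 9 over the Lidar device 8, mirroring the switching behavior described above.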

REFERENCE SIGNS:

1 vehicle
2 road
3 city
4 control unit
5 data storage unit
6 sensor device
7 camera
8 Lidar device
9 infrared proximity sensor
10 temperature sensor
11 communication interface
12a, 12b external device
13 route
14a, 14b obstacle
15 stop area
20 environmental situation data
21 vehicle situation data
22 illumination data
23 weather data
24 environmental type data
25 infrastructure data
26 ground condition data
27 resolution requirement determination algorithm
28 resolution requirement information
28' further resolution requirement information
28" obstacle dependent resolution requirement information
29 sensor data
29' further sensor data
30 obstacle determination algorithm
31 obstacle data
32 obstacle assessment algorithm
33 obstacle significance information
34 obstacle significance threshold
35 operating information determination algorithm
36 operating information
S1-S15 steps