TOGHER MIKE (IE)
US20090073263A1 | 2009-03-19
DE102009035422A1 | 2011-02-03
US20140152774A1 | 2014-06-05
US20100259371A1 | 2010-10-14
US20140036063A1 | 2014-02-06
US7161616B1 | 2007-01-09
Claims

1. Driver assistance device (2) for a motor vehicle (1), including
- a camera device (3) for capturing an environment (5) of the motor vehicle (1) and for providing camera information representing the captured environment (5);
- at least one further sensor device (7), which has a modality different from the camera device (3), for capturing the environment (5) of the motor vehicle (1) and for providing sensor information representing the environment (5);
- a computing device (6) for generating a camera image (10) representing the environment (5) based on the provided camera information with an associated perspective depending on the provided sensor information;
- a display device (11) for displaying the generated camera image (10);
characterized in that at least one representation parameter characterizing the dependency of the camera image (10) on the sensor information is adjustable by an operator during intended use of the driver assistance device (2).

2. Driver assistance device (2) according to claim 1, characterized in that the further sensor device (7) includes an ultrasonic sensor device.

3. Driver assistance device (2) according to any one of the preceding claims, characterized in that optical emphasis of an image area (12) of the camera image (10), which corresponds to an environmental region (13) of the environment (5), in which an obstacle (9) is captured by the further sensor device (7), is adjustable by the representation parameter.

4. Driver assistance device (2) according to any one of the preceding claims, characterized in that the perspective associated with the camera image (10) is adjustable by the representation parameter, in particular a position (16) and/or an orientation of a virtual camera (18) capable of being associated with the camera image (10).

5. Driver assistance device (2) according to any one of the preceding claims, characterized in that a variation of the perspective associated with the camera image (10) is adjustable by the representation parameter, in particular in the form of a trajectory (17) of a virtual camera (18) capable of being associated with the camera image (10).

6. Driver assistance device (2) according to any one of the preceding claims, characterized in that the computing device (6) is formed to generate the camera image (10) also depending on an activation of at least one driver assistance function of the driver assistance device (2), in particular depending on an activation of a parking assistance function and/or an activation of a cross-traffic warning function and/or an activation of a pedestrian warning function, and different representation parameters are preferably adjustable for each activation.

7. Driver assistance device (2) according to any one of the preceding claims, characterized in that the camera image (10) has two image sections (10a, 10b) with a respectively associated perspective, wherein an associated representation parameter is in particular respectively adjustable for the image sections (10a, 10b).

8. Driver assistance device (2) according to claim 7, characterized in that an arrangement of the image sections (10a, 10b) relative to each other and/or a size and/or a shape of the image sections (10a, 10b) are adjustable by the representation parameter.

9. Driver assistance device (2) according to any one of the preceding claims, characterized in that the camera image (10) includes a perspective external view of the motor vehicle (1) and the environment (5).

10. Motor vehicle (1) with a driver assistance device (2) according to any one of the preceding claims.
The invention relates to a driver assistance device for a motor vehicle, including a camera device for capturing an environment of the motor vehicle and for providing camera information representing the captured environment; a further sensor device, which has a modality different from the camera device, for capturing the environment of the motor vehicle and for providing sensor information representing the environment; a computing device for generating a camera image representing the environment, with an associated perspective, based on the provided camera information and depending on the provided sensor information; and a display device for displaying the generated camera image.
In order to increase safety and comfort in road traffic, a driver of a motor vehicle should have the best possible overview of the environment of the vehicle. In this, the driver is assisted by a range of driver assistance devices in modern motor vehicles. With the increasing number of sensor devices of different modalities, such as ultrasonic, camera, radar or lidar sensor devices, as well as a growing number of algorithms that allow, for example, pedestrian recognition, parking space recognition or cross-traffic recognition, it is desirable to present the different sensor information, or the information provided by the corresponding recognition algorithms, to the driver in a coherent and intuitive manner, such that individual critical information is easily recognizable for the driver while the overall overview is not impaired.
In this context, methods are known from US 2014/0036063 A1 and US 7,161,616 B1 in which a view is automatically adapted depending on a status of the motor vehicle. For example, an image of a rearward-facing camera is automatically overlaid on a display device upon engaging the reverse gear. If the motor vehicle turns to the right, a larger environmental area on the right side of the motor vehicle can likewise be automatically presented on the display device. It is therefore the object of the invention to provide a driver of a motor vehicle having sensor devices of different modalities with the best possible overview of the environment of the motor vehicle.
This object is achieved by the subject matter of the independent claim. Advantageous embodiments are apparent from the dependent claims, the description and the figures.
An aspect of the invention relates to a driver assistance device for a motor vehicle including a camera device, at least one further sensor device (thus one or more further sensor devices), a computing device and a display device. The camera device serves for capturing an environment of the motor vehicle by at least one camera, thus one or more cameras, and for providing camera information representing the captured environment. This camera information can correspondingly include a respective raw image, or a respective raw image series, of one or more cameras. The further sensor device has a modality different from that of the camera device, that is, it is associated with or based on another modality. It can thus be based on a different physical operating principle and in particular provides qualitatively different sensor information. The further sensor device serves for capturing the environment of the motor vehicle and for providing sensor information representing the environment in addition to the camera information.
The further sensor device can, for example, include or be an ultrasonic sensor device and/or a radar sensor device and/or a lidar sensor device. The computing device serves for generating a camera image representing the environment, with at least one associated perspective, based on the provided camera information and depending on the provided sensor information. Here, the perspective corresponds to a view of the environment from a preset position with a preset orientation of a viewer, for example a camera of the camera device, as the generated camera image represents it. The display device serves for displaying the generated camera image, that is, the display device is formed for displaying the generated camera image.
The camera information provided by the camera device, for example in the form of one or more camera raw images, can therefore be represented by the proposed driver assistance device depending on further sensor information, for improved recognition by a driver. Thus, different views, that is, different camera images with different perspectives, can be generated from the camera information, and these perspectives with the corresponding image details can be used to represent more accurately, for example, critical spatial areas in which a risk possibly impends. This can in particular be effected in real time. In particular, a respective image detail can also be selected by the representation parameter for the camera image from a camera raw image as the camera information.
At least one representation parameter recorded in the computing device and characterizing the dependency of the camera image on the sensor information, thus one or more representation parameters, is adjustable by an operator during intended use of the driver assistance device in the motor vehicle, and is thus alterable according to personal preferences of the operator. The operator can select a desired value from a set of values for the representation parameter or preset the desired value. The operator can do this in normal driving operation, after delivery and completion of the production of the driver assistance device and/or the motor vehicle. For example, the corresponding representation parameter can be preset once by the operator after delivery, and the altered representation parameter can then be recorded as a new standard in the computing device. The representation parameter can in particular be associated with the camera image and in particular determine the appearance of the camera image. Thus, if, upon parking, an ultrasonic sensor device detects a distance to a further vehicle that is lower than a threshold value, a perspective of the camera image can, for example, be adapted as desired by the operator, for example switched from an ego-perspective of the motor vehicle to a bird's eye perspective, if the driver, thus the operator, can thereby better assess the real distance to the further vehicle.
An operator can thus individually configure the representation of the environment with the camera image by the proposed driver assistance device, in order to be able to take the further sensor information of the further sensor device into account in the best possible way. For example, it can be provided that the driver assistance device is formed to switch from a standard operating mode with a factory-preset display on the display device to a customized operating mode with the representation parameter adjusted by the operator, after sensor information with a preset content is provided, for example a distance to a further vehicle that falls below a presettable or preset threshold value. In the customized operating mode, the camera image is then generated based on the provided camera information, depending on the provided sensor information, as preset by the operator via the representation parameter. After a preset time and/or upon reaching a preset state, for example parking with the motor turned off, the device can in particular automatically switch back to the standard operating mode. By adjusting the representation parameter, the operator thus determines how the view of the environment, that is the camera image, responds or is adapted to the further sensor information of the further sensor device.
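Purely as an illustration (the patent itself specifies no implementation), the described switch between a standard operating mode and a customized operating mode, triggered by an ultrasonic distance falling below an operator-set threshold, could be sketched as follows. All names, types and numeric values are assumptions introduced here for clarity:

```python
from dataclasses import dataclass

@dataclass
class RepresentationParams:
    """Operator-adjustable representation parameters (illustrative names)."""
    threshold_m: float = 1.5           # switch distance chosen by the operator
    custom_perspective: str = "birds_eye"
    default_perspective: str = "ego"

def select_mode(distance_m: float, params: RepresentationParams) -> str:
    """Return the operating mode implied by the ultrasonic distance:
    'standard' with the factory view, or 'customized' with the
    operator-adjusted view once the distance falls below the threshold."""
    return "customized" if distance_m < params.threshold_m else "standard"

def select_perspective(distance_m: float, params: RepresentationParams) -> str:
    """Choose the camera-image perspective for the current mode."""
    if select_mode(distance_m, params) == "customized":
        return params.custom_perspective
    return params.default_perspective

params = RepresentationParams(threshold_m=1.2)
print(select_mode(2.0, params), select_perspective(2.0, params))  # standard ego
print(select_mode(0.8, params), select_perspective(0.8, params))  # customized birds_eye
```

The threshold and the target perspective are exactly the kind of values the text describes as recordable "as a new standard" in the computing device after the operator adjusts them once.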
This has the advantage that the view of the environment can be adapted by the driver assistance device to the respective situation identified by the sensor information of the further sensor device in a manner individually optimized for the driver. This adjustability allows the operator to define the respectively preferred view with the preferred perspective and/or perspective transition (for example in the form of a camera movement of a virtual camera as described below), such that comfort and safety are increased. By adjusting the representation parameter or parameters, the operator also engages more closely with the respective representation of the environment in the camera image, such that, already while adjusting the representation parameter and configuring the driver assistance device, the operator becomes acquainted with the system and possible characteristics such as perspective distortions, whereby the system knowledge of the driver and thereby the safety are further increased.
The adjustment of the representation parameter or parameters does not necessarily have to be effected by the operator; rather, a default adjustment can be preset. This default adjustment can then be altered, for example, by an operating input of the operator. One possibility of adjustment is selecting among multiple presettings from a list, which has the advantage of a very simple, clear operating input. However, it can in particular also be provided that the operator performs corresponding adjustments via a touch-sensitive display panel (touch screen), for example adjusts the position and/or orientation of a virtual camera, which determines the perspective associated with the camera image, relative to a motor vehicle model represented on the display panel. Thus, a dynamic perspective alteration and/or perspective change can also be preset in a simple and intuitive manner, for example by tracing a trajectory of the virtual camera on the display panel. This trajectory can then determine a movement of the virtual camera. The representation parameter or parameters can also be adjusted and recorded differently for different driving scenarios, such that, for example, in a parking operation the dependency of the generated camera image is configured differently than when driving in city traffic or when approaching a crossroads. For example, upon falling below a threshold distance, a different perspective can be adjusted by the user for a parking operation of the motor vehicle than for falling below a threshold distance in driving operation of the motor vehicle.

In an advantageous embodiment, it is provided that the further sensor device includes or is an ultrasonic sensor device. This has the advantage that distance information, which is only extractable from the camera information with great computational effort, can be taken into account in the generated camera image in a simple manner and with technical means available in most motor vehicles, such that safety is increased. Especially concerning the subjective assessment of a distance, the individual variations are particularly great: for example, in parking, severely different minimum distances to other vehicles are perceived as acceptable by different persons.
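The recording of different representation parameters for different driving scenarios, described above, could be sketched as a simple per-scenario parameter store. This is a non-authoritative illustration; the scenario names, keys and values are assumed:

```python
# Illustrative per-scenario parameter store: the operator records different
# representation parameters for a parking operation than for normal driving.
scenario_params = {
    "parking": {"threshold_m": 0.5, "perspective": "birds_eye"},
    "driving": {"threshold_m": 2.0, "perspective": "ego_wide"},
}

def params_for(scenario: str) -> dict:
    """Look up the operator-adjusted parameters for the detected scenario,
    falling back to the driving defaults for unknown scenarios."""
    return scenario_params.get(scenario, scenario_params["driving"])

print(params_for("parking")["threshold_m"])   # 0.5
print(params_for("driving")["perspective"])   # ego_wide
```

A real system would persist this store in the computing device so that the operator's one-time configuration survives as the new standard.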
In a further advantageous embodiment, it is provided that an optical emphasis of an image area of the camera image, or optionally of an image area of the respective image section or image sections, which corresponds to an environmental region of the environment in which an obstacle is captured or recognized by the further sensor device, can be adjusted by the representation parameter or parameters. For example, the optical emphasis can include a color marking and/or displayed text. An image section, as described below, is to be understood as an image portion or image area of the camera image. In order to avoid confusion of terms here, the image area which can be optically emphasized according to this embodiment is in each case an image area associated with a corresponding environmental region of the environment. In contrast, the respective image sections can, for example, include different views of the environment from different perspectives, as described below, but each also constitutes a corresponding area of the camera image.
This has the advantage that the conspicuousness of the emphasis as well as the size of the corresponding image area are individually adaptable, such that the driver can, for example, ensure that a warning given by the system via the optical emphasis also corresponds to the personal sense of a corresponding risk of collision. Thereby, the driver obtains an improved, more trustworthy overview of the environment of the motor vehicle via the driver assistance device.
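A minimal sketch of such an adjustable optical emphasis, assuming a fixed mapping from ultrasonic sensor sectors to pixel regions and a grayscale image represented as a nested list (all names and the emphasis rule are illustrative assumptions):

```python
# Sketch: brighten the image area that corresponds to the ultrasonic sector
# in which an obstacle was detected; `strength` stands in for the
# operator-adjustable conspicuousness of the emphasis.
def emphasize(image, sector_to_region, obstacle_sectors, strength=1.4):
    """Return a copy of `image` (2D list of values in [0, 255]) with the
    regions of the reporting sectors brightened by `strength`."""
    out = [row[:] for row in image]
    for sector in obstacle_sectors:
        (r0, r1), (c0, c1) = sector_to_region[sector]
        for r in range(r0, r1):
            for c in range(c0, c1):
                out[r][c] = min(255, int(out[r][c] * strength))
    return out

img = [[100] * 4 for _ in range(4)]
regions = {"rear_left": ((0, 2), (0, 2))}   # sector -> (rows, cols)
out = emphasize(img, regions, ["rear_left"])
print(out[0][0], out[3][3])  # 140 100
```

In a production system the emphasis would of course be rendered on the GPU over the composed camera image; the point here is only that both the affected region and the strength are plain parameters the operator can adjust.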
In a further advantageous embodiment, it is provided that the perspective associated with the camera image, or optionally with the respective image section of the camera image, is adjustable by the representation parameter or one of the representation parameters, in particular a position and/or an orientation of a virtual camera capable of being associated with the camera image. Thus, a perspective external view, a so-called "bowl view", of the motor vehicle can for example be generated from the provided camera information as explained below, in which a virtual camera can then be positioned and oriented, adjustably by the operator or the driver, exactly such that a critical environmental region of the environment is particularly well recognizable for the operator.
This has the advantage that an optimum overview of the environment, corresponding to the individual characteristics and preferences of the driver, is provided to the driver here too.
In a further advantageous embodiment, it is provided that an alteration of the perspective associated with the camera image, or optionally with the respective image section, is adjustable by the representation parameter or one of the representation parameters, in particular in the form of a trajectory of a virtual camera capable of being associated with, or corresponding to, the camera image or optionally the respective image section. Thus, a comprehensive overview of the corresponding situation, that is the environment, can be given in a dynamic manner from a plurality of viewing directions in the form of a camera movement along the trajectory. Because this is individually adjustable, drivers who perceive such a dynamic overview as rather confusing retain the possibility of obtaining it only in specific, exactly determined situations. This also contributes to an improved overview of the environment of the motor vehicle.
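One simple way to realize such a camera movement, offered here only as a sketch, is to interpolate the virtual camera position along the operator-traced trajectory stored as waypoints. The waypoint coordinates and the linear interpolation scheme are assumptions, not taken from the patent:

```python
# Sketch: move a virtual camera along an operator-defined trajectory by
# linear interpolation between recorded (x, y, z) waypoints in vehicle
# coordinates.
def camera_pose(waypoints, t):
    """Interpolate the virtual-camera position for t in [0, 1] along a
    polyline of (x, y, z) waypoints."""
    if t <= 0.0:
        return waypoints[0]
    if t >= 1.0:
        return waypoints[-1]
    seg = t * (len(waypoints) - 1)   # fractional segment index
    i = int(seg)
    f = seg - i
    a, b = waypoints[i], waypoints[i + 1]
    return tuple(a[k] + f * (b[k] - a[k]) for k in range(3))

# Example trajectory: rise from behind the vehicle, over it, and down in front.
traj = [(0.0, -5.0, 2.0), (0.0, 0.0, 6.0), (0.0, 5.0, 2.0)]
print(camera_pose(traj, 0.5))   # (0.0, 0.0, 6.0)
print(camera_pose(traj, 0.25))  # (0.0, -2.5, 4.0)
```

A smoother production implementation would typically interpolate orientation as well (e.g. with quaternions) and ease in and out of the motion, but the stored-waypoint idea is the same.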
In a further advantageous embodiment, it is provided that the computing device is formed to generate the camera image also depending on an activation of at least one driver assistance function of the driver assistance device. In particular, the camera image can be generated depending on an activation of a parking assistance function and/or a cross-traffic warning function and/or a pedestrian warning function. Different representation parameters, that is different values for the respective representation parameters, are preferably adjustable for each activation, that is for each driving situation represented by the activation. Alternatively or additionally, the different representation parameters can also be adjusted directly depending on a recognized driving scenario, for example an inner-city driving scenario, a driving scenario at a crossroads or a driving scenario at a parking space.
This has the advantage that the behavior of the driver assistance device, that is the representation of the environment, is further customized according to personal preferences, such that an adapted and thus clear overview of the environment is again provided.

In a further advantageous embodiment, it is provided that the camera image comprises at least two image sections with a respectively associated perspective. At least one associated representation parameter is in particular respectively adjustable for each of the image sections, preferably independently of the adjustment for the respectively other image section. Thus, a bird's eye perspective can for example be combined with an ego-perspective: a view of the motor vehicle and the environment generated from the camera information from a bird's eye perspective can be displayed in a first image section of the camera image, while at the same time a view into the environment generated from the camera information from an ego-perspective of the motor vehicle is displayed in a second image section.
This has the advantage that different perspectives, and thus different views of the environment, can be presented to the driver at the same time, such that the driver gets a particularly good overview of the environment. Moreover, because this representation is coupled to the sensor information of the further sensor device, only one view, for example from the ego-perspective, can be displayed in a normal operating mode, and only in a critical situation, for example when an obstacle is detected, is the additional view, for example from the bird's eye perspective, displayed. Thereby, the driver is not unnecessarily distracted by additional views of the environment, but only obtains additional information in the case explicitly desired by him, which accordingly does not confuse him, but further improves the overview of the environment.
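The obstacle-dependent split view described above could be sketched as a layout function that returns display tiles. The tile format, display size and split ratio are illustrative assumptions; only the split ratio stands in for an operator-adjustable representation parameter:

```python
# Sketch: compose the camera image from two independently parameterized
# image sections; the secondary view appears only once an obstacle is
# detected, as described in the text.
def layout_sections(obstacle_detected, params):
    """Return a list of (name, x, y, w, h) tiles for the display, using the
    operator-adjusted split ratio when the secondary view is active."""
    w, h = params["display_w"], params["display_h"]
    if not obstacle_detected:
        return [("ego", 0, 0, w, h)]           # normal operating mode
    split = int(w * params["split_ratio"])      # operator-adjustable ratio
    return [("birds_eye", 0, 0, split, h),
            ("ego", split, 0, w - split, h)]

p = {"display_w": 800, "display_h": 480, "split_ratio": 0.4}
print(layout_sections(False, p))  # [('ego', 0, 0, 800, 480)]
print(layout_sections(True, p))
```

Extending `params` with per-section perspective settings would cover the independent adjustability of claim 7 as well.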
In particular, it can be provided that an arrangement of the image sections relative to each other and/or a size of the image sections and/or a shape of the image sections are adjustable by the representation parameter or one of the representation parameters.
This has the advantage that individual visual habits and/or visual capabilities can be taken into account; for example, a smaller image is already sufficient for some drivers to obtain an adequate overview of the environment, while others need a larger one. In this respect, the individual adjustability of the arrangement and the size of the respective image section is particularly suitable for improving the overview of the environment of the motor vehicle.
In a further advantageous embodiment, it is provided that the camera image includes a perspective external view, a so-called "bowl view", of the motor vehicle and the environment. The camera image then shows the motor vehicle (as a model) and the environment from the perspective of a virtual camera located outside of the motor vehicle, the view or virtual camera image of which can be calculated from the camera information of one or more real cameras of the camera device.
This has the advantage that diverse views can be generated, and especially relative positions between the own motor vehicle and further objects in the environment, for example vehicles, can be represented particularly well and intuitively. Here too, the many possible adjustments, such as distance or angle to the motor vehicle, open up a plurality of possibilities which can correspondingly be individually optimized in the proposed driver assistance device.
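As a very reduced illustration of the geometry behind such an external view, a point given in vehicle coordinates can be projected into the image of a virtual camera with a simple pinhole model. The camera placement, the downward-looking orientation and the focal length are assumptions made for this sketch; a real bowl view also involves projecting the fisheye camera images onto a bowl-shaped surface, which is omitted here:

```python
# Sketch: project a 3D point (vehicle coordinates, z up) into the image of a
# virtual camera hovering above the vehicle and looking straight down.
def project(point, cam_pos, focal_px=500.0):
    """Pinhole projection; returns (u, v) image offsets in pixels."""
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = cam_pos[2] - point[2]          # depth of the point below the camera
    if dz <= 0:
        raise ValueError("point is not in front of the camera")
    return (focal_px * dx / dz, focal_px * dy / dz)

# A point 1 m beside the vehicle, seen from a virtual camera 5 m above:
print(project((1.0, 0.0, 0.0), (0.0, 0.0, 5.0)))  # (100.0, 0.0)
```

Raising the camera (larger `dz`) shrinks the offset, which matches the intuitive effect of the adjustable distance mentioned above.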
A further aspect of the invention relates to a motor vehicle with a driver assistance device according to any one of the described embodiments.
The features and feature combinations mentioned above in the description, as well as the features and feature combinations mentioned below in the description of the figures and/or shown in the figures alone, are usable not only in the respectively specified combination, but also in other combinations, without departing from the scope of the invention. Thus, implementations which are not explicitly shown in the figures and explained, but which arise from and can be generated by separated feature combinations from the explained implementations, are also to be considered as encompassed and disclosed by the invention. Implementations and feature combinations which do not have all of the features of an originally formulated independent claim are also to be considered as disclosed.
Moreover, implementations and feature combinations, in particular those resulting from the implementations set out above, which extend beyond or deviate from the feature combinations set out in the back-references of the claims, are to be considered as disclosed.
Below, embodiments of the invention are explained in more detail based on schematic drawings. The figures show:
Fig. 1 a motor vehicle with an exemplary embodiment of a driver assistance device;
Fig. 2 an exemplary camera image of an exemplary embodiment of the driver assistance device; and
Fig. 3 an exemplary display, based on which an exemplary adjustment of a representation parameter is explained.

In the figures, identical or functionally identical elements are provided with the same reference characters.
In Fig. 1, a motor vehicle 1 with an exemplary embodiment of a driver assistance device 2 is schematically illustrated. In this example, the driver assistance device 2 comprises a camera device 3 with four cameras 4a to 4d. The camera device 3 serves for capturing an environment 5 of the motor vehicle 1 and for providing camera information representing the captured environment 5 to a computing device 6.
Furthermore, the driver assistance device 2 comprises a further sensor device 7, presently configured as an ultrasonic sensor device, which here includes six ultrasonic sensors 8a to 8f disposed at the vehicle rear of the motor vehicle 1 in the shown example. Correspondingly, the sensor device 7 can also comprise further ultrasonic sensors at the vehicle front, which are not illustrated here for reasons of clarity. As an ultrasonic sensor device, the sensor device 7 is thus associated with a modality different from that of the camera device 3. The sensor device 7 also serves for capturing the environment 5 of the motor vehicle 1 as well as for providing sensor information representing the environment 5, which presently includes distance information, for example representing a distance d between the motor vehicle 1 and an object 9 external to the motor vehicle.
Furthermore, the driver assistance device 2 comprises the computing device 6, which is coupled to the camera device 3 and the further sensor device 7 and formed to generate a camera image 10 representing the environment 5 with an associated perspective, based on the provided camera information and depending on the provided sensor information. Finally, the driver assistance device 2 comprises a display device 11 for displaying the generated camera image 10.
At least one representation parameter characterizing the dependency of the generation of the camera image 10 on the sensor information is adjustable by an operator, for example a driver of the motor vehicle 1, during intended use of the driver assistance device 2, thus in the operation of the motor vehicle 1. Presently, a division of the camera image 10 into, here, two different image sections 10a, 10b with correspondingly associated respective perspectives, presently a bird's eye perspective for the first image section 10a and an ego-perspective for the second image section 10b, is adjustable for different distances d as the sensor information by means of multiple representation parameters. Thereby, for example in parking, the environment 5 can first be displayed on the display device 11 in the camera image 10 from the ego-perspective of the rear camera 4c, for example in a standard operating mode. However, as soon as the object 9 is closer than a predetermined threshold distance, thus the distance d is less than a preset threshold distance, the driver assistance device 2 in this example switches to a customized operating mode, and the division of the camera image into the two image sections 10a, 10b is effected in a manner preset by the user via the adjusted representation parameter or parameters. When the parking operation is completed, the device can, for example, switch back to the standard operating mode with a single view of the environment 5, or the view can also be completely masked out.
In Fig. 2, an exemplary camera image 10 is now illustrated. Presently, the camera image 10 is divided into multiple, here two, image sections 10a, 10b, wherein a different perspective is respectively associated with the two image sections 10a, 10b: a bird's eye perspective on the motor vehicle 1 and the environment 5 from the position of a virtual camera for the first image section 10a, and an ego-perspective of a rear-side camera 4c (Fig. 1) of the motor vehicle 1 for the second image section 10b. In both image sections 10a, 10b, a respective image area 12 of the camera image 10, which corresponds to an environmental region 13 (Fig. 1) of the environment 5, is optically emphasized. This optical emphasis is effected according to the representation parameter adjustable by the operator. Moreover, the perspective of the two image sections 10a, 10b can for example be individually adapted, as exemplarily explained with reference to Fig. 3, and/or a size of the respective image sections 10a, 10b can be adapted to specific preferences or characteristics, such as a visual habit of a driver.
In Fig. 3, a display of the display device 11 is illustrated, based on which the adjustment of the representation parameter is explained. Here, the motor vehicle 1 is illustrated with an ultrasonic sensor 8e in a side view. A position 16, 16' and an orientation of a virtual camera 18 can be shifted relative to the position of the motor vehicle 1 by an operating limb 14 of the operator. In the shown example, a preview 10a' of the corresponding resulting image section 10a is shown in an image portion 15. It is presently shown how the image section 10a with the bird's eye perspective, for example as shown in Fig. 2, which corresponds to a capture of the virtual camera 18 from a position 16, is now transitioned into an enlarged view, in which, for example, the ultrasonic sensor 8e closest to the object 9, with the associated environmental region 13, is represented in an enlarged manner. A dynamic alteration of the perspective associated with the camera image 10 or the respective image section 10a, 10b can also be adjustable by the operator, such that a corresponding trajectory 17 is preset for the virtual camera 18, along which the virtual camera 18 moves or can move when an object 9 is detected in the environmental region 13.