Title:
METHOD FOR DISPLAYING A SCENE FROM A CERTAIN POINT OF VIEW ON A DISPLAY DEVICE OF A VEHICLE AND DRIVER ASSISTANCE SYSTEM
Document Type and Number:
WIPO Patent Application WO/2019/154774
Kind Code:
A1
Abstract:
The invention relates to a method for displaying a scene (3) from a certain point of view (4) on a display device (5) of a vehicle (1), wherein the scene (3) comprising a vehicle model (7) is provided, the certain point of view (4) is set within the scene (3) and the scene (3) from the set certain point of view (4) is displayed on the display device (5) of the vehicle (1). Moreover, when providing the scene (3), the scene (3) is provided together with at least one passenger model (8).

Inventors:
TOGHER MIKE (IE)
WARD ENDA PETER (IE)
O'MALLEY FERGAL (IE)
Application Number:
PCT/EP2019/052697
Publication Date:
August 15, 2019
Filing Date:
February 05, 2019
Assignee:
CONNAUGHT ELECTRONICS LTD (IE)
International Classes:
G06T19/20; G06T13/40; G06T17/00; B60R1/00
Foreign References:
US 2017/0195564 A1 (2017-07-06)
US 2006/0244828 A1 (2006-11-02)
US 2012/0309520 A1 (2012-12-06)
Other References:
Salzmann, H. et al.: "The Two-User Seating Buck: Enabling Face-to-Face Discussions of Novel Car Interface Concepts", Virtual Reality Conference 2008 (VR '08), IEEE, Piscataway, NJ, USA, 8 March 2008, pages 75-82, XP031340002, ISBN: 978-1-4244-1971-5

Van der Meulen, P. et al.: "RAMSIS - The Leading CAD Tool for Ergonomic Analysis of Vehicles", 22 July 2007, Digital Human Modeling (Lecture Notes in Computer Science), Springer Berlin Heidelberg, pages 1008-1017, ISBN: 978-3-540-73318-8, XP019063724
Attorney, Agent or Firm:
JAUREGUI URBAHN, Kristian (DE)
Claims

1. Method for displaying a scene (3) from a certain point of view (4) on a display device (5) of a vehicle (1), comprising the steps:

providing the scene (3) comprising a vehicle model (7);

setting the certain point of view (4) within the scene (3); and

displaying the scene (3) from the set certain point of view (4) on the display device (5) of the vehicle (1);

characterized in that

when providing the scene (3), the scene (3) is provided together with at least one passenger model (8).

2. Method according to claim 1,

characterized in that

the scene (3), the vehicle model (7) and the at least one passenger model (8) are provided as a 3D scene (3), a 3D vehicle model (7) and at least one 3D passenger model (8), respectively.

3. Method according to one of the preceding claims,

characterized in that

at least one first detection device (9), especially comprising at least one seat occupancy sensor, of the vehicle (1) determines which seats of the vehicle (1) are occupied by passengers (P), wherein for each detected occupied seat of the vehicle (1) a corresponding passenger model (8) occupying the respective seat is provided, especially each at a corresponding position within the vehicle model (7), wherein each corresponding position of the vehicle model (7) corresponds to the respective seat of the vehicle (1) which has been detected to be occupied by the corresponding passenger (P).

4. Method according to one of the preceding claims,

characterized in that if it is detected by means of at least one seat belt sensor that a seat belt is closed, the scene (3) is provided with the at least one passenger model (8) wearing a seat belt.

5. Method according to one of the preceding claims,

characterized in that

at least one second detection device (9) in the interior of the vehicle (1), especially an interior camera, captures a characteristic of at least one passenger (P) of the vehicle (1), and when providing the scene (3) the passenger model (8) is modelled with a virtual characteristic in dependency of the captured characteristic of the at least one passenger (P).

6. Method according to claim 5,

characterized in that

the at least one captured characteristic is at least one of:

a gender of the at least one passenger (P);

one of at least two defined age classes of the at least one passenger (P), especially whether the at least one passenger (P) is an adult or a child;

a hair color of the at least one passenger (P);

a skin color of the at least one passenger (P);

a length of hair of the at least one passenger (P);

a color of clothes of the at least one passenger (P);

a facial expression of the at least one passenger (P);

a gesture of the at least one passenger (P);

an appearance of a face of the at least one passenger (P).

7. Method according to one of the preceding claims,

characterized in that

if at least one predefined user input is received, the at least one passenger model (8) is adapted in dependency of the user input.

8. Method according to one of the preceding claims,

characterized in that

for each registered passenger (P) of the vehicle (1) a user profile is stored, which contains the passenger model (8) corresponding to the respective registered passenger (P) and at least one virtual characteristic of the passenger model (8).

9. Method according to claim 8,

characterized in that

if a registered passenger (P) is recognized, the scene (3) is provided together with the passenger model (8) stored in the profile associated with the recognized passenger (P).

10. Method according to one of the preceding claims,

characterized in that

if an animal is detected within the vehicle (1), the animal is classified as a certain kind of animal and the scene (3) is provided with a model of the certain kind of animal.

11. Method according to one of the preceding claims,

characterized in that

the certain point of view (4) is selected from a plurality of predefined different settable points of view (14).

12. Method according to one of the preceding claims,

characterized in that

the certain point of view (4) is outside the vehicle model (7).

13. Method according to one of the preceding claims,

characterized in that

the certain point of view (4) is inside the vehicle model (7).

14. Driver assistance system (2) for a vehicle (1) for displaying a scene (3) from a certain point of view (4) on a display device (5), wherein the driver assistance system (2) comprises

a modelling module (6), which is configured to provide a scene (3) comprising a vehicle model (7) and to set the certain point of view (4) within the scene (3); and the display device (5), which is configured to display the scene (3) from the set certain point of view (4);

characterized in that

the modelling module (6) is configured to provide the scene (3) with at least one passenger model (8).

15. Driver assistance system (2) according to claim 14,

characterized in that

the driver assistance system (2) comprises at least one sensor (9), especially a seat occupancy sensor and/or a seat belt sensor and/or a camera, wherein the modelling module (6) is configured to provide the scene (3) with the at least one passenger model (8) in dependency of a sensor input provided by the at least one sensor (9).

Description:
Method for displaying a scene from a certain point of view on a display device of a vehicle and driver assistance system

The invention relates to a method for displaying a scene, especially a 3D scene, from a certain point of view on a display device of a vehicle, wherein the scene comprising a vehicle model is provided, the certain point of view is set within the scene and the scene is displayed from the set certain point of view on the display device of the vehicle. The invention also relates to a driver assistance system for a vehicle.

Systems are known from the prior art which provide, for example, a 360 degree surround view of a vehicle, which can be displayed on a display device of the vehicle to assist the driver in parking the vehicle. For this purpose, for example, a top view of the 360 degree surroundings of the vehicle including a 2D bitmap image of the vehicle itself can be shown in planar view. But there are also systems which are able to provide fixed-point 3D or full 3D human machine interface vehicle models. These 3D animations provide a more lifelike experience for the user. Such 3D models may also allow the camera view to be adapted to provide the best views for the driver.

A similar system is described, for example, in US 2017/0195564 A1. For providing a 3D scene on the basis of camera inputs, a synthesized image of the surroundings of the vehicle can be provided and overlaid on a 3D bowl mesh. Additionally, a vehicle model can be superimposed over the scene on the mesh to represent the position of the vehicle on which the set of cameras is mounted. Such a 3D scene can then be displayed on a 2D display device from a certain point of view. Such a virtual view point may also be manually chosen by the driver.

Because a realistic representation of the vehicle and its surroundings on a display device helps the driver orient himself or herself, and especially the vehicle, with regard to its environment, it is desirable to provide such a representation as realistically as possible.

Therefore, it is an object of the present invention to provide a method for displaying a scene from a certain point of view on a display device of a vehicle and a driver assistance system, by means of which a scene can be displayed as realistically and as lifelike as possible. This object is achieved by a method for displaying a scene and a driver assistance system comprising the features according to the corresponding independent claims.

Advantageous embodiments of the invention are presented in the dependent claims, the description and the drawings.

According to a method according to the invention for displaying a scene from a certain point of view on a display device of a vehicle the scene comprising a vehicle model is provided, the certain point of view is set within the scene and the scene is then displayed from the set certain point of view on the display device of the vehicle. Moreover, when providing the scene, the scene is additionally provided together with at least one passenger model.

So, advantageously, the vehicle model can be supplemented with models of humans, namely the at least one passenger model, which can be for example an avatar, inside the vehicle model, to include for example the driver and passengers in the real vehicle. Thus, the at least one passenger model can correspond to a real passenger in the real vehicle. Alternatively, it is also possible that the scene is provided with a fixed and predetermined number of passenger models independent of the number of real passengers in the real vehicle. In both cases, but especially in the first case, a much more realistic and lifelike experience can be provided for a user via the display device or, generally, a human machine interface of which the display device of the vehicle is part. Especially, to enhance realism, it is preferred that the at least one passenger model is provided in dependency of at least one sensor input, which in turn is provided by at least one sensor of the vehicle. For this purpose, a variety of different sensors can be used, like cameras, seat occupancy sensors or seat belt sensors, which is explained later in more detail. The scene moreover can also comprise information about the environment of the real vehicle, especially in form of images, provided by environmental sensors of the vehicle, like for example environmental cameras, laser scanners, ultrasonic sensors and/or radars. The representation of the environment of the vehicle or at least part thereof in the scene can be done as known from the prior art. For example, a virtual sphere or hemisphere or a bowl, like the initially mentioned bowl mesh, can be provided, in the middle of which the vehicle model including the at least one passenger model is positioned, and synthesized images of the surroundings of the vehicle provided by environmental sensors of the vehicle can be projected onto this sphere or hemisphere or bowl, for example from a defined position. Preferably, to enhance realism further, the scene, the vehicle model and the at least one passenger model are provided in form of a 3D scene, a 3D vehicle model and at least one 3D passenger model, respectively. Further, the 3D scene with the 3D vehicle model and the at least one 3D passenger model can be created by means of computer graphics, especially 3D computer graphics. As a 3D scene provides much more realism than a 2D scene and therefore provides a much more natural perception of the environment to a user, this is a preferred embodiment. Therefore, some of the embodiments described in the following relate to the scene as 3D scene, meaning that also the vehicle model and the at least one passenger model are provided in 3D. Nevertheless, all embodiments described in the following can also be realized similarly with respect to the scene as 2D scene, meaning that also the vehicle model and the at least one passenger model are provided in 2D.
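The following minimal Python sketch illustrates one way such a scene object could be assembled from sensor-derived information. The class, attribute and seat names (Scene, PassengerModel, viewpoint, the asset reference "ego_vehicle.glb") are illustrative assumptions and not taken from the patent; the projection of the surround-view images onto a bowl mesh is omitted for brevity.

```python
# Illustrative sketch only; names and values are assumptions, not the patent's implementation.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PassengerModel:
    seat: str                                   # e.g. "driver", "rear_left"
    wearing_seat_belt: bool = False
    attributes: Dict[str, str] = field(default_factory=dict)   # hair color, age class, ...

@dataclass
class Scene:
    vehicle_model: str                          # placeholder reference to a 3D asset
    passengers: List[PassengerModel] = field(default_factory=list)
    viewpoint: Tuple[float, float, float] = (0.0, -6.0, 3.0)   # virtual camera (x, y, z)

def build_scene(occupied_seats: List[str]) -> Scene:
    """Provide the scene with the vehicle model and one passenger model per occupied seat."""
    scene = Scene(vehicle_model="ego_vehicle.glb")
    for seat in occupied_seats:
        scene.passengers.append(PassengerModel(seat=seat))
    return scene

if __name__ == "__main__":
    print(build_scene(["driver", "front_passenger"]))
```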

According to an advantageous embodiment of the invention, at least one first detection device, especially comprising at least one seat occupancy sensor of the vehicle, determines which seats of the vehicle are occupied by passengers, wherein for each detected occupied seat of the vehicle a corresponding passenger model occupying the respective seat is provided, especially each at a corresponding position within the vehicle model, wherein each corresponding position of the vehicle model corresponds to the respective seat of the vehicle which has been detected to be occupied by the corresponding passenger. In other words, if a passenger is detected in the real vehicle on a certain seat of the real vehicle, for example by means of a seat occupancy sensor, which can be configured as a pressure sensor, then a passenger model, like an avatar, can be provided within the vehicle model on the corresponding virtual seat. For example, if a driver is detected sitting on the driver seat, then an avatar can be created sitting on the virtual driver seat of the vehicle model. By means of this representation of passengers in form of corresponding models of persons in the correct positions of the vehicle model, the realism of the represented 3D scene can be enhanced even more. By means of such a representation it is also possible for the driver to check, for example, whether all passengers are on board or whether somebody is missing by just looking at the display device.
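As a minimal sketch of this idea (assumed seat names and example coordinates, not values from the patent), the occupancy readings could be mapped to avatar placements in the vehicle model's coordinate frame as follows:

```python
# Illustrative sketch; seat names and positions (x forward, y left, in metres) are example values.
SEAT_POSITIONS = {
    "driver":          (1.2,  0.4),
    "front_passenger": (1.2, -0.4),
    "rear_left":       (0.1,  0.5),
    "rear_right":      (0.1, -0.5),
}

def passenger_models_from_occupancy(occupancy):
    """occupancy maps seat name -> bool from the seat occupancy sensors; one avatar
    placement is returned per detected occupied seat, at the matching seat position."""
    return [
        {"seat": seat, "position": SEAT_POSITIONS[seat]}
        for seat, occupied in occupancy.items()
        if occupied and seat in SEAT_POSITIONS
    ]

# Example: driver and rear-left passenger detected as present.
print(passenger_models_from_occupancy(
    {"driver": True, "front_passenger": False, "rear_left": True}))
```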

According to another advantageous embodiment of the invention, if it is detected by means of at least one seat belt sensor that a seat belt is closed, the 3D scene is provided with the at least one passenger model wearing a seat belt. Therefore, a driver of the vehicle or any other person in the vehicle, to whom the displayed 3D scene is presented on the display device, can easily check whether all passengers have closed their seat belts. Thus, safety can be enhanced. Furthermore, it is also possible to use a seat belt sensor to determine whether a person is sitting on a certain seat. This can be done additionally or alternatively to using the above-mentioned seat occupancy sensor.
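A purely illustrative way to attach the belt state to the passenger models (the dictionary keys and seat names are assumptions, not the patent's data model) is sketched below:

```python
# Illustrative sketch; dictionary keys and seat names are assumptions.
def apply_seat_belt_state(passenger_models, belt_closed):
    """belt_closed maps seat name -> bool from the seat belt sensors; a closed
    belt lets the corresponding avatar be rendered wearing a seat belt graphic."""
    for model in passenger_models:
        model["wearing_seat_belt"] = bool(belt_closed.get(model["seat"], False))

models = [{"seat": "driver"}, {"seat": "rear_left"}]
apply_seat_belt_state(models, {"driver": True, "rear_left": False})
print(models)
```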

According to another advantageous embodiment of the invention, at least one second detection device in the interior of the vehicle, especially an interior camera, captures a characteristic of the at least one passenger of the vehicle and, when providing the 3D scene, the passenger model is modelled with a virtual characteristic in dependency of the captured characteristic of the at least one passenger. So, advantageously, also personal characteristics of the passengers can be taken into account when providing the passenger models representing the respective passengers. Therefore, the real situation can be reflected even more accurately. A variety of different characteristics, especially personal characteristics, can thereby be captured.

Preferably, the at least one captured characteristic is at least one of a gender of the at least one passenger, namely whether the at least one passenger is male or female, and/or one of at least two defined age classes of the at least one passenger, especially whether the at least one passenger is an adult or a child, and/or a hair color of the at least one passenger and/or a skin color of the at least one passenger and/or a length of hair of the at least one passenger and/or a color of clothes of the at least one passenger and/or a facial expression of the at least one passenger and/or a gesture of the at least one passenger and/or an appearance of a face of the at least one passenger. Therefore, advantageously, a variety of different personal characteristics of passengers can be captured and represented by corresponding virtual characteristics of the passenger model, namely the avatar. For capturing those characteristics one or more cameras, especially interior cameras, are very advantageous. By means of at least one interior camera, images of the interior of the vehicle including the passengers can be captured and analyzed to extract the above-named characteristics. Such an analysis may use feature recognition and classification methods. For example, an image of at least part of at least one passenger including his/her face can be captured by means of at least one interior camera, and the part of the image relating to the face of the at least one passenger can be provided as the face of the passenger model representing this at least one passenger. Therefore, advantageously, a look of the passenger model can be provided that is almost identical to that of the real passenger.
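The sketch below illustrates, under assumed attribute keys and default values, how characteristics estimated from interior-camera images might be turned into avatar attributes; the image analysis itself is out of scope here and only its assumed output dictionary is used:

```python
# Illustrative sketch; attribute keys, default values and the detector output are assumptions.
DEFAULT_AVATAR = {
    "age_class": "adult",
    "gender": "unspecified",
    "hair_color": "brown",
    "hair_length": "short",
    "skin_color": "medium",
    "clothes_color": "grey",
}

def avatar_attributes(detected):
    """Start from a neutral default avatar and override every attribute the
    interior-camera analysis was able to determine."""
    avatar = dict(DEFAULT_AVATAR)
    for key, value in detected.items():
        if key in avatar and value is not None:
            avatar[key] = value
    return avatar

print(avatar_attributes({"age_class": "child", "hair_color": "blond"}))
```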

Additionally or alternatively, at least some of the above-named characteristics, or also other characteristics, can be specified by a user himself/herself. Therefore, it is another advantageous embodiment of the invention that, if at least one predefined user input is received, the at least one passenger model is adapted in dependency of the user input. This makes it possible for a user to personalize a predefined avatar himself/herself according to his/her preferences. By means of such a user input a user may make choices referring to the above-named characteristics. For example, by means of such a user input a user can determine his/her gender, age, hair color, skin color, color of clothes or also clothing style, hair style, and so on. By means of the user input the user may also choose accessories, like a hat, or jewelry. Such user inputs can be received by any arbitrary input device, like a touch screen, which can be part of the display device of the vehicle, or a controller, by means of voice input, and so on.
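A minimal sketch of such personalization, assuming the same attribute dictionary as in the previous sketch, could simply let explicit user choices override the detected values:

```python
# Illustrative sketch; keys are assumptions and mirror the attribute dictionary above.
def personalize_avatar(avatar, user_input):
    """Explicit user choices (e.g. entered via the touch screen) take precedence
    over detected characteristics; additional keys such as accessories are added."""
    result = dict(avatar)
    result.update({k: v for k, v in user_input.items() if v is not None})
    return result

detected = {"hair_color": "brown", "clothes_color": "grey"}
print(personalize_avatar(detected, {"clothes_color": "red", "accessory": "hat"}))
```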

Moreover, it is further advantageous when for each registered passenger of the vehicle a user profile is stored, which contains the passenger model corresponding to the respective registered passenger and at least one virtual characteristic of the passenger model. This for example allows a passenger to personalize his/her avatar and then to store his/her settings in his/her corresponding user profile. But a passenger does not necessarily have to make choices or settings of his/her own. The system may also be able to automatically recognize registered passengers and to make adaptations to the stored passenger models corresponding to the registered passengers based on newly captured characteristics of the respective passengers.

Therefore, it is another advantageous embodiment of the invention that, if a registered passenger is recognized, the 3D scene is provided together with the passenger model stored in the profile associated with the recognized passenger. Thus, advantageously, it is also possible to automatically identify passengers, for example by means of facial image recognition or other identification methods, and then to automatically read out the passenger model corresponding to the identified passenger and to present this model together with the 3D scene on the display device. This is very comfortable for a user, because he/she does not have to make any selections or choices himself/herself, like manually choosing his/her profile. Nevertheless, manually choosing the profile would also be possible. The manual choice of the user can be received by the system by corresponding user input means.
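One conceivable, simplified realization of the profile storage and the lookup after recognition is sketched below; the in-memory dictionary and the passenger identifier are assumptions, and the facial image recognition that yields the identifier is not implemented here:

```python
# Illustrative sketch; the passenger identifier and the in-memory store are assumptions,
# and the recognition step that produces the identifier is not implemented here.
PROFILES = {}   # passenger_id -> stored avatar attributes (the personalized passenger model)

def store_profile(passenger_id, avatar):
    PROFILES[passenger_id] = dict(avatar)

def passenger_model_for(passenger_id, fallback):
    """Return the stored avatar of a recognized registered passenger, or the
    freshly built fallback model for unknown passengers."""
    return dict(PROFILES.get(passenger_id, fallback))

store_profile("registered_driver", {"hair_color": "blond", "clothes_color": "blue"})
print(passenger_model_for("registered_driver", fallback={"hair_color": "brown"}))
print(passenger_model_for("unknown_guest", fallback={"hair_color": "brown"}))
```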

According to another advantageous embodiment of the invention, if an animal is detected within the vehicle, the animal is classified as a certain kind of animal and the 3D scene is provided with a model of the certain kind of animal. The certain kind of animal can be one of a plurality of predefined kinds of animal, like animal classes. Such a certain kind of animal can be, for example, a dog or a cat, or more specifically also a certain kind of dog. Therefore, an automatic detection and/or recognition and/or classification of animals, for example by means of an interior camera and/or an image processing system, can allow the model to be further enhanced to show whether pets are present inside the vehicle.
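As an illustration, assuming hypothetical class labels and asset file names, the classified kind of animal could be mapped to a corresponding model as follows:

```python
# Illustrative sketch; class labels and asset file names are assumptions.
ANIMAL_MODELS = {
    "dog": "assets/dog.glb",
    "cat": "assets/cat.glb",
}

def animal_model_for(label):
    """Return the model asset for the classified kind of animal, falling back to
    a generic pet placeholder for kinds that are not specifically modelled."""
    return ANIMAL_MODELS.get(label, "assets/generic_pet.glb")

print(animal_model_for("dog"))
print(animal_model_for("rabbit"))
```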

Moreover, it is advantageous if the certain point of view is selected from a plurality of predefined settable points of view. For example, the point of view can be selected in dependency of a corresponding user input. Moreover, by means of a user input, the user may adjust or change the point of view. Furthermore, the certain point of view can be selected from a finite number of predefined points of view or, on the other hand, from an infinite number of points of view. For example, the certain point of view may be chosen along a predefined line, for example along the line of one or more circles, for example around the vehicle model. The certain point of view can also be automatically selected, for example in dependency of a certain vehicle state or user driving action. For example, if the user is driving backwards with the vehicle, a point of view can be selected which shows the environment of the vehicle, at least in part, behind the vehicle. Also combinations thereof are possible; for example, the certain point of view can be chosen automatically in dependency of a certain vehicle state, like the current driving direction, and at the same time the automatically selected certain point of view can be changed manually by the user by selecting another point of view. Therefore, many possibilities are provided to the user to provide a representation of the vehicle, the passengers as well as the environment from the best and most suitable point of view.
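The following sketch illustrates such a selection logic under assumed view names, coordinates and vehicle-state input: a manual user choice takes precedence, otherwise the view is picked automatically from the vehicle state (here only the gear is considered):

```python
# Illustrative sketch; view names, coordinates and the vehicle-state input are assumptions.
PREDEFINED_VIEWS = {
    "behind_vehicle": (0.0, 6.0, 3.0),    # a view intended to show the area behind the vehicle
    "front_top":      (0.0, -6.0, 3.0),   # above and in front of the vehicle model
    "interior_rear":  (0.2, 0.0, 1.2),    # roughly at a rear-seat head position
}

def select_viewpoint(gear, user_choice=None):
    """A manual user choice overrides the automatic selection; when reversing,
    a view showing the environment behind the vehicle is selected."""
    if user_choice in PREDEFINED_VIEWS:
        return PREDEFINED_VIEWS[user_choice]
    return PREDEFINED_VIEWS["behind_vehicle" if gear == "reverse" else "front_top"]

print(select_viewpoint(gear="reverse"))
print(select_viewpoint(gear="drive", user_choice="interior_rear"))
```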

Moreover, the point of view can be outside the vehicle model. When the point of view is outside the vehicle model, a large amount of environmental information can be presented with respect to the vehicle model, which is very advantageous in parking or driving situations.

Moreover, the certain point of view can also be inside the vehicle model. For example, a view from the back seat through the windscreen onto the environment in front of the vehicle can be presented on the display device. Also a view from inside the vehicle through the rear window of the vehicle can be provided on the display. Therefore, advantageously, also additional information about the interior of the vehicle can be provided by means of the respective representation and by means of a point of view which is positioned inside the vehicle model. Again, a user might be provided with the option of choosing whether the certain point of view is outside or inside the vehicle model. Therefore, a lot of advantageous adaptations to different situations are possible.

The invention also relates to a driver assistance system for a vehicle for displaying a scene from a certain point of view on a display device, wherein the driver assistance system comprises a modelling module, which is configured to provide a scene comprising a vehicle model and to set the certain point of view within the scene, and the display device, which is configured to display the scene from the set certain point of view.

Moreover, the modelling module is configured to provide the scene with at least one passenger model.

The advantages described with regard to the method according to the invention and its embodiments also apply for the driver assistance system according to the invention.

Preferably, the driver assistance system comprises at least one sensor, especially a seat occupancy sensor and/or a seat belt sensor and/or a camera, wherein the modelling module is configured to provide the scene with the at least one passenger model in dependency of a sensor input provided by the at least one sensor.

The driver assistance system can also comprise a seat occupancy sensor and a seat belt sensor for each respective seat of the vehicle.

Moreover, the display device can be configured as a 2D display device. The display device can also be configured as a touch screen. Moreover, it is also possible to provide the display device in the form of a 3D display device, for example as a head mounted display or virtual reality glasses or augmented reality glasses, and so on. Therefore, the 3D scene can be displayed from the certain point of view in the form of a 2D display or from two certain points of view at the same time in the form of a 3D display.
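As a simplified illustration of displaying from two points of view at the same time, two eye viewpoints could be derived from one central viewpoint; the eye distance and the lateral offset along the x axis are assumptions that only hold for a camera looking along the y axis:

```python
# Illustrative sketch; eye distance and the offset axis are simplifying assumptions.
def stereo_viewpoints(center, eye_distance=0.064):
    """Return (left_eye, right_eye) camera positions derived from one central
    (x, y, z) viewpoint; eye_distance is in metres."""
    x, y, z = center
    half = eye_distance / 2.0
    return (x - half, y, z), (x + half, y, z)

left, right = stereo_viewpoints((0.0, -6.0, 3.0))
print(left, right)
```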

The invention also relates to a vehicle comprising a driver assistance system according to the invention or its embodiments. The advantages described with regard to the method according to the invention and its embodiments therefore also apply for the vehicle according to the invention. Moreover, the preferred embodiments described with respect to the method according to the invention also provide further corresponding and advantageous embodiments of the driver assistance system and the vehicle according to the invention.

Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.

The figures show:

Fig. 1 a schematic illustration of a vehicle comprising a driver assistance system for displaying a 3D scene from a certain point of view on a display device according to an embodiment of the invention;

Fig. 2 a schematic illustration of a 3D scene with a vehicle model and a passenger model according to an embodiment of the invention;

Fig. 3 a schematic illustration of a 3D scene presented on a display device of the vehicle from a point of view outside the vehicle model; and

Fig. 4 a schematic illustration of a 3D scene presented on a display device of the vehicle from a point of view inside the vehicle model according to an embodiment of the invention.

Fig. 1 shows a schematic illustration of a vehicle 1 with a driver assistance system 2, especially a viewing system, for displaying a 3D scene 3 (compare Fig. 2) from a certain point of view 4 (compare Fig. 2) on a display device 5 according to an embodiment of the invention. The driver assistance system 2 comprises a modelling module 6, which is configured to provide the 3D scene 3 comprising a 3D vehicle model 7 as well as a passenger model 8 (compare Fig. 2) corresponding to at least one passenger P of the vehicle 1. For capturing passengers P inside the vehicle 1, the driver assistance system 2 can further comprise one or more sensors, sensor devices or sensor systems. In Fig. 1 a sensor system 9 is schematically illustrated as an example. The sensor system 9 can for example comprise a seat occupancy sensor, which detects if a seat of the vehicle 1 is occupied. The sensor system 9 can also comprise a seat belt sensor, which detects if a seat belt of a corresponding seat is closed or opened. Moreover, the sensor system 9 can also comprise an interior camera, which captures images of the interior of the vehicle 1.

By means of the sensor system 9 it can be detected whether or which of the seats of the vehicle 1 are occupied by passengers P and whether their corresponding seat belts are closed; moreover, optionally, personal characteristics of the passengers can be captured, for example by means of the interior camera, like whether a passenger P is male or female or is an adult or a child. Also further characteristics, like hair color, length of hair, skin color, and so on can be captured. In dependency of the information captured by means of the sensor system 9, the modelling module 6 provides a 3D scene 3 including a passenger model 8 corresponding to at least one passenger P within the vehicle 1. This passenger model 8 can further be provided with the captured characteristics of the passenger P. Especially, this passenger model 8 can be provided in form of an avatar, for example a male or a female avatar, an avatar representing a child or an adult, an avatar modelled with blond or black hair, with long or short hair, and so on, depending on the captured characteristics of the at least one passenger P. Alternatively or additionally, a user, like the passenger P, may also make his/her own selections concerning the appearance of the avatar representing him/her. For this purpose, the driver assistance system 2 can also comprise input means 10, by means of which a user can make a user input for selecting the one or more characteristics of the appearance of the avatar representing him/her. These selections can then optionally be stored to a user profile. For this purpose the driver assistance system 2 can also comprise a storage device 11, in which user profiles of registered passengers P can be stored. Therefore, the models 8 of the persons can be personalized and stored by the user to reflect their preferences in terms of, for example, the avatar skin color, clothing color, clothing style, hair style, hair color.

Moreover, facial image recognition can be used to adjust a facial image of the model 8 of the person to be aligned with a previously stored photograph or image of that user.

Moreover, automatic detection, recognition or classification of animals by an interior camera or image processing system can also allow the model to be further enhanced to show if pets are present inside the vehicle 1. The 3D vehicle model, namely the vehicle model 7, can also be enhanced, for example by animation. Such animations can be, for example, rotating wheels, a door opening, a boot and bonnet opening, and so on. So, when displaying the 3D scene on the display device 5 from the certain point of view 4, parts of the vehicle model 7, or also of the passenger models 8, can be displayed moving. It is also possible to personalize many aspects of the vehicle model 7 to reflect driver or user preferences; for example, the car color can be coded to reflect the real vehicle color.

Fig. 2 shows a schematic illustration of a 3D scene 3 including a vehicle model, namely the vehicle model 7, as well as a passenger model 8, like an avatar. The 3D scene 3 can be provided with a 3D surface, like the hemisphere or bowl 12 illustrated here. Onto this bowl 12, images captured by means of environmental sensors 13 (compare Fig. 1), which can be mounted around the vehicle for capturing the environment of the vehicle 1, can be projected. Moreover, a point of view 4 can be selected, which then represents a virtual camera position. This point of view 4 can be selected from various positions around the vehicle model 7, for example everywhere on the circles 14. The number of selectable points of view 4 can be infinite or also limited to a certain number of selectable points of view 4. Moreover, the virtual camera position, and therefore the point of view from which the 3D scene is finally shown on the display device 5, can be selected not only outside the vehicle model 7 but also inside the vehicle model 7. These two cases are exemplarily illustrated in Fig. 3 and Fig. 4.
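A minimal sketch of selecting virtual camera positions on such a circle around the vehicle model (radius, height and the sampling of the angle are assumed example values, not taken from the patent) could look like this:

```python
# Illustrative sketch; radius, height and the angle sampling are assumed example values.
import math

def viewpoint_on_circle(angle_deg, radius=6.0, height=3.0):
    """Virtual camera position (x, y, z) on a horizontal circle of the given radius
    around the origin of the vehicle model, at the given height."""
    a = math.radians(angle_deg)
    return (radius * math.cos(a), radius * math.sin(a), height)

# A finite set of selectable points of view can simply sample the circle:
views = [viewpoint_on_circle(angle) for angle in range(0, 360, 45)]
print(views[:3])
```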

Fig. 3 shows a schematic illustration of the displaying of the 3D scene 3 from a certain point of view 4 outside the vehicle model 7, and Fig. 4 shows a schematic illustration of the displaying of the 3D scene 3 from a certain point of view 4 inside the vehicle model 7. As can also be seen from Fig. 3 and Fig. 4, the displaying of the 3D scene 3 also contains a representation of a passenger model 8 corresponding to a passenger P in the vehicle 1 in the form of a respective avatar.

To conclude, the invention provides the possibility of enhancing the realism and providing a more lifelike experience for the user or driver by inserting one or more models of humans or even animals inside the vehicle model, which can be based on sensor input. So, the driver assistance system, which can be configured as a viewing system, can make use of various sensors, for example a seat occupancy sensor to determine whether or not a person is present in the vehicle, and insert the corresponding avatar into the vehicle model. Furthermore, based on sensor inputs, such as a closed seat belt, the system can further enhance the model with a seat belt graphic added to the model to indicate that the seat belt is being worn. An interior camera can be used to supply images of the interior, and image processing techniques can be employed to determine whether a person is male, female, an adult or a child and to subsequently adjust the model used to reflect the real situation. Therefore, many possibilities of adaptation are provided to always reflect the current situation as realistically as possible.