

Title:
METHOD FOR WHITE BALANCE OF AN IMAGE PRESENTATION AND CAMERA SYSTEM FOR A MOTOR VEHICLE
Document Type and Number:
WIPO Patent Application WO/2014/118167
Kind Code:
A1
Abstract:
The invention relates to a method for white balance of an image provided by means of a camera of a motor vehicle by: determining color values (U, V) of the pixels in a color space (U, V); selecting a subset of the pixels according to a preset selection criterion; determining a color temperature of the image based on the color values (U, V) of the selected subset of the pixels; and performing the white balance depending on the color temperature, wherein a main color space area around an origin (21) of the color space (U, V) and at least one further color space area (22, 23, 24, 25) different from the main color space area are defined in the preset color space (U, V), wherein it is checked if a preset minimum number of the pixels required for determining the color temperature is within the main color space area, and in this case, those pixels are selected according to the selection criterion, which are within the main color space area, and otherwise, those pixels are selected according to the selection criterion, which are within a further color space area (22, 23, 24, 25).

Inventors:
ZLOKOLICA VLADIMIR (IE)
DEEGAN BRIAN MICHAEL THOMAS (IE)
DENNY PATRICK EOGHAN (IE)
Application Number:
PCT/EP2014/051614
Publication Date:
August 07, 2014
Filing Date:
January 28, 2014
Assignee:
CONNAUGHT ELECTRONICS LTD (IE)
International Classes:
H04N9/73; B60K35/00
Domestic Patent References:
WO 2002/007426 A2 (2002-01-24)
WO 2011/082716 A1 (2011-07-14)
Foreign References:
US 2012/0288145 A1 (2012-11-15)
EP 1 585 345 A2 (2005-10-12)
US 2009/0021647 A1 (2009-01-22)
US 7,139,412 B2 (2006-11-21)
US 2011/0156887 A1 (2011-06-30)
EP 2 012 271 A2 (2009-01-07)
Attorney, Agent or Firm:
JAUREGUI URBAHN, Kristian (Laiernstr. 12, Bietigheim-Bissingen, DE)
Claims:
Claims

1. Method for white balance of an image presentation (13), which is displayed on a display device (8) of a motor vehicle (1), wherein an image (I3, I4, I5, I6) of an environmental region (9, 10, 11, 12) of the motor vehicle (1) is provided by means of at least one camera (3, 4, 5, 6) of the motor vehicle (1) and the image presentation (13) for display on the display device (8) is generated from at least one region (Ι3', Ι4', Ι5', Ι6') of the image (I3, I4, I5, I6), including the steps of:

- mapping pixels of the image (I3, I4, I5, I6) into a predetermined color space (U, V) and herein determining color values (U, V) of the pixels in the color space (U, V),

- selecting a subset of the pixels according to a preset selection criterion,

- determining a color temperature (UC, VC) of the image (I3, I4, I5, I6) based on the color values (U, V) of the selected subset of the pixels,

- performing the white balance depending on the color temperature (UC, VC), characterized in that

in the preset color space (U, V), a main color space area (20) around an origin (21) of the color space (U, V) and at least one further color space area (22, 23, 24, 25) different from the main color space area (20) are defined, wherein it is checked if a preset minimum number (TNO) of the pixels required for determining the color temperature (UC, VC) is located within the main color space area (20), and

- in this case, those pixels are selected according to the selection criterion, which are within the main color space area (20), and otherwise

- those pixels are selected according to the selection criterion, which are within a further color space area (22, 23, 24, 25).

2. Method according to claim 1,

characterized in that

the pixels are mapped into a UV color space (U, V) and the chrominance values of the pixels are determined as the color values (U, V), wherein an average of U values of the selected pixels and/or an average of V values of the selected pixels is determined as the color temperature (UC, VC).

3. Method according to claim 1 or 2,

characterized in that

the performance of the white balance includes that depending on the color temperature (UC, VC), a gain factor (FR, FG, FB) of at least one color channel (R, G, B) of the camera (3, 4, 5, 6), in particular a gain factor (FR) of the red color channel (R) and/or a gain factor (FB) of the blue color channel (B), is adjusted.

4. Method according to claim 3,

characterized in that

by means of the camera (3, 4, 5, 6) a temporal sequence of frames (I3, I4, I5, I6) is provided and the white balance is performed by iteratively adapting the gain factor (FR, FG, FB) over the frames (I3, I4, I5, I6) depending on the color temperature (UC, VC) of the respectively preceding image (I3, I4, I5, I6).

5. Method according to any one of the preceding claims,

characterized in that

the white balance is controlled by means of a control device (7) separate from the camera (3, 4, 5, 6), by means of which the image presentation (13) for display on the display device (8) is generated, wherein control signals are communicated from the control device (7) to the camera (3, 4, 5, 6), due to which the white balance is performed within the camera (3, 4, 5, 6), in particular a gain factor (FR, FG, FB) of at least one color channel (R, G, B) of the camera (3, 4, 5, 6) is adjusted.

6. Method according to any one of the preceding claims,

characterized in that

an area is defined as the main color space area (20), which includes exclusively pixels, the chrominance values of which satisfy the following condition:

wherein U and V denote the chrominance values of a pixel and T1 denotes a first limit value.

7. Method according to any one of the preceding claims,

characterized in that

an area is defined as the main color space area (20), which includes exclusively pixels, the chrominance values of which satisfy the following condition: (U > -T2 and V < T2) or (U < T2 and V > -T2),

wherein U and V denote the chrominance values of a pixel and T2 denotes a second limit value.

8. Method according to any one of the preceding claims,

characterized in that

the at least one further color space area (22, 23, 24, 25) is exclusively located within a quadrant (Q1, Q2, Q3, Q4) of the color space (U, V), in particular is bounded by axes of the color space (U, V) on the one hand and additionally by a preset limit function (26) on the other hand.

9. Method according to any one of the preceding claims,

characterized in that

in addition to the main color space area (20), two further color space areas (23, 25) different from each other are defined, one of which is located within the second quadrant (Q2) of the color space (U, V) and the other of which is located within the fourth quadrant (Q4) of the color space (U, V), and that to these further color space areas (23, 25) the maximum value is respectively determined from the absolute values of all of the color values (U, V) of the respective further color space area (23, 25), wherein, if the number (NO) of the pixels located within the main color space area (20) is less than the preset minimum number (TNO), depending on a comparison of the maximum values of the further color space areas (23, 25), it is determined, which one of the further color space areas (23, 25) is selected, the pixels of which are taken as a basis for determining the color temperature (UC, VC).

10. Method according to any one of the preceding claims,

characterized in that

in addition to the main color space area (20), a total of four further color space areas (22, 23, 24, 25) different from each other are defined:

- a first further color space area (22) within the first quadrant (Q1 ) of the color space (U, V), which includes exclusively positive U values and positive V values,

- a second further color space area (23) within the second quadrant (Q2) of the color space (U, V), which includes exclusively negative U values and positive V values,

- a third further color space area (24) within the third quadrant (Q3) of the color space (U, V), which includes exclusively negative U values and negative V values, and

- a fourth further color space area (25) within the fourth quadrant (Q4) of the color space (U, V), which includes exclusively positive U values and negative V values,

wherein, if the number (NO) of the pixels located within the main color space area (20) is lower than the preset minimum number (TNO), one of the four further color space areas (22, 23, 24, 25) is selected, the pixels of which are taken as a basis for the determination of the color temperature (UC, VC).

11. Method according to claim 10,

characterized in that

a temporal sequence of frames (I3, I4, I5, I6) is provided by means of the camera (3, 4, 5, 6) and the white balance is iteratively performed over the frames (I3, I4, I5, I6), wherein, if the preset minimum number (TNO) of pixels is located within the main color space area (20), but a period of time, for which the color temperature (UC, VC) is outside of a preset range of set values, exceeds a preset threshold and/or the color temperature (UC, VC) diverges from the preset range of set values, the second or the fourth further color space area (23, 25) is selected and the pixels of this selected color space area (23, 25) are taken as a basis for the determination of the color temperature (UC, VC).

12. Method according to claim 10 or 11,

characterized in that

if the number (NO) of the pixels located within the main color space area (20) is less than the preset minimum number (TNO), it is determined, which one of the four further color space areas (22, 23, 24, 25) includes the greatest number of pixels.

13. Method according to claim 12,

characterized in that

if the second or the fourth further color space area (23, 25) includes the greatest number of pixels, one of these color space areas (23, 25) is selected and the pixels of this selected color space area (23, 25) are taken as a basis for the determination of the color temperature (UC, VC).

14. Method according to claim 12 or 13,

characterized in that

if the first or the third further color space area (22, 24) includes the greatest number of pixels, the selection of one of the color space areas (22, 23, 24, 25) for the determination of the color temperature (UC, VC) is omitted and the white balance is performed independently of the color temperature (UC, VC).

15. Method according to any one of the preceding claims,

characterized in that

the image presentation (13) for display on the display device (8) is generated from a partial region (Ι3', Ι4', Ι5', Ι6') of the image (I3, I4, I5, I6) and exclusively pixels of the displayed partial region (Ι3', Ι4', Ι5', Ι6') are taken as a basis for determining the color temperature (UC, VC).

16. Method according to any one of the preceding claims,

characterized in that

at least two cameras (3, 4, 5, 6) of the motor vehicle (1) each provide an image (I3, I4, I5, I6) of an environmental region (9, 10, 11, 12) of the motor vehicle (1), and the image presentation (13) for display on the display device (8) is generated from respective partial regions (Ι3', Ι4', Ι5', Ι6') of the images (I3, I4, I5, I6), wherein the white balance of the respective partial regions (Ι3', Ι4', Ι5', Ι6') is performed individually for each camera (3, 4, 5, 6).

17. Method according to any one of the preceding claims,

characterized in that

at least two cameras (3, 4, 5, 6) of the motor vehicle (1) each provide an image (I3, I4, I5, I6) of an environmental region (9, 10, 11, 12) of the motor vehicle (1) and the image presentation (13) for display on the display device (8) is generated from respective partial regions (Ι3', Ι4', Ι5', Ι6') of the images (I3, I4, I5, I6), wherein the partial regions (Ι3', Ι4', Ι5', Ι6') of the images overlap each other in an overlap area, and wherein for performing the white balance of the respective partial region (Ι3', Ι4', Ι5', Ι6'), luminance values (Y) of the partial region (Ι3', Ι4', Ι5', Ι6') in the overlap area are acquired and the size (T1) of the main color space area (20) is determined depending on the luminance values (Y).

18. Camera system (2) for a motor vehicle (1), including at least one camera (3, 4, 5, 6) for providing images (I3, I4, I5, I6) of an environmental region (9, 10, 11, 12) of the motor vehicle (1), and including a control device (7), which is adapted to perform a method according to any one of the preceding claims.

Description:
Method for white balance of an image presentation and camera system for a motor vehicle

The invention relates to a method for white balance of an image presentation, which is displayed on a display device of a motor vehicle, wherein an image of an environmental region of the motor vehicle is provided by means of at least one camera of the motor vehicle and the image presentation for display on the display device is generated from at least one region of the image. Pixels of the image are mapped into a predetermined color space, and color values of the pixels in this color space are thereby determined. A subset of the pixels is selected according to a preset selection criterion, and a color temperature of the image is determined based on the color values of the selected subset of the pixels. The white balance is then performed depending on the color temperature. In addition, the invention relates to a camera system for a motor vehicle, which is adapted to perform such a method.

Methods for performing the white balance of an image, i.e. a color correction of the image, are already prior art. In addition, it is already known to attach a plurality of cameras to a motor vehicle, which capture the environment around the motor vehicle. Then, an image presentation can be displayed on a display of the motor vehicle, which is based on the images of the cameras. For example, the so-called "bird's eye view" can be provided from the images of all of the cameras, i.e. a top view presentation showing the motor vehicle and its environment from above. This top view presentation is generated from respective partial regions of the images of all of the cameras, wherein that partial region of the image is respectively used which images the respective environmental region up to a predetermined distance from the motor vehicle. However, the invention is not restricted to the provision of such a top view as the image presentation. Other types of image presentation can also be generated from the respective partial regions of the images. At this point, three-dimensional views are mentioned by way of example, which can include a projection onto a concave surface, such as the surface of a hemisphere, a paraboloid, a hyperboloid or a similar concave surface.

The generation of a top view from images of several cameras is known, for example, from the document US 2011/0156887 A1. The document EP 2 012 271 A2 also describes a method for providing a top view presentation from the images of several cameras. That an image presentation for display on a display can be generated from the images of several cameras is additionally known from the document US 7,139,412 B2. There, it is proposed to compensate for the differences in the color values of the images, for example by averaging the color values of the images.

A method for performing the white balance in an image presentation obtained from several images is furthermore known from the document WO 2011/082716 A1.

Color correcting methods, i.e. white balance methods, generally serve for correcting color casts caused in the images by different lighting sources located in the imaged environment. The known methods for automatic white balance usually involve two steps: first, the color temperature of the image is determined, and then the color correction is performed based on the determined color temperature. Very different approaches are known from the prior art for determining the color temperature and thus the color casts in the images. One known approach uses exclusively a subset of pixels for determining the color temperature of the image, namely those whose color values (for example in the UV color space) satisfy a predetermined selection criterion. Thus, in calculating the color temperature, exclusively those pixels can be taken into account which lie in the UV color space around the origin of this color space and thus around the zero value. In this color space area, namely, grey pixels are expected, which provide a reliable basis for the calculation of the color temperature. However, this approach is not sufficient in camera systems of motor vehicles, because the imaged environment of the motor vehicle changes relatively quickly, and objects are often found in the environment of the motor vehicle which are relatively large on the one hand (for example other vehicles) and also have a particularly intense, monochromatic coloring (e.g. red, blue, yellow, white or green) on the other hand, which can then cause color casts in the images. For example, a red object located in the environment of the motor vehicle can cause a blue color cast in the displayed image presentation, because the monochromatic coloring of this object increases the red color average of the image. An internal white balance algorithm of the camera then attempts to compensate for this increased red color average by increasing the gain factor of the blue color channel and thus causes the blue color cast.

These problems can be countered, for example, by a corresponding adaptation of the gain factors of the respective color channels of the camera, provided the color temperature of the image is correctly determined. However, this assumes that the pixels selected for determining the color temperature correspond to pixels that are as grey as possible and are not associated with monochromatic artificial objects. For example, they can be those pixels which image a region of the grey road. A particular challenge, however, lies in locating these grey pixels in the corresponding color space.

It is an object of the invention to demonstrate a solution, how in a method of the initially mentioned kind the white balance of the image presentation can be improved compared to the prior art such that a realistic presentation of the environment can be provided in particular to the driver on the display device in the motor vehicle.

According to the invention, this object is solved by a method as well as by a camera system having the features according to the respective independent claims.

Advantageous implementations of the invention are the subject matter of the dependent claims, of the description and of the figures.

A method according to the invention serves for white balance of an image presentation, which is displayed on a display device of a motor vehicle, wherein an image of an environmental region of the motor vehicle is provided by means of at least one camera of the motor vehicle and the image presentation is generated from at least one region of the image. Pixels of the image are mapped into a predetermined color space by determining color values of the pixels in the color space. A subset of the pixels is selected according to a preset selection criterion, and a color temperature of the image is then determined based on the color values exclusively of the selected subset of the pixels. The white balance is performed depending on the color temperature. In the preset color space, a main color space area is defined around an origin, i.e. especially the zero point, of the color space. In addition, a further color space area different from this main color space area is also defined. It is checked if a preset minimum number of the pixels required for the determination of the color temperature is within the main color space area. If this criterion is satisfied, those pixels are selected for determining the color temperature according to the selection criterion, which are within the main color space area.

Otherwise, if the number of the pixels within the main color space area is less than the preset minimum number, those pixels are selected according to the selection criterion, which are within the further color space area, or - if certain conditions are satisfied - the white balance is performed independently of the color temperature.

Besides the main color space area, which is defined around the origin of the color space and in which grey pixels are usually expected in an image without color casts, at least one additional color space area is thus defined, into which the sought grey pixels may have been shifted due to color casts in the image. Thus, it is first checked whether a preset minimum number of pixels is located in the main color space area, in which grey pixels are normally expected, and if this criterion is satisfied, these pixels are used for determining the color temperature. However, due to more intense color casts in the image, it can occur that the grey pixels have been shifted away from the origin of the color space and are thus located in other color space areas. This can be identified in that only few pixels lie in the main color space area and the number of these pixels is lower than the preset minimum number. In this case, the pixels which lie in the further color space area are used for determining the color temperature. The method according to the invention therefore "searches" for the grey pixels, which normally lie in the main color space area around the origin of the color space, but which may have been shifted relative to the origin by color casts in the image. In this manner, the color temperature of the image can always be determined accurately and reliably, such that the white balance can also be performed effectively and an image presentation can be displayed on the display device which realistically images the environment of the motor vehicle.
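Purely as an illustration (not part of the claimed subject matter), the area-based pixel selection described above can be sketched in Python. The concrete area predicates, the minimum count and the fall-back rule of "the further area containing the most pixels" (cf. claim 12) are assumptions of this sketch:

```python
def select_pixels(pixels_uv, in_main, min_count, further_areas):
    """Select the pixel subset used for the colour-temperature estimate.

    pixels_uv:     list of (U, V) chrominance pairs
    in_main:       predicate marking the main colour space area
    further_areas: mapping name -> predicate for each further area
    """
    main = [p for p in pixels_uv if in_main(*p)]
    if len(main) >= min_count:
        return "main", main
    # Too few candidate grey pixels near the origin: fall back to the
    # further colour space area containing the most pixels.
    best_name, best = None, []
    for name, predicate in further_areas.items():
        selected = [p for p in pixels_uv if predicate(*p)]
        if len(selected) > len(best):
            best_name, best = name, selected
    return best_name, best
```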

Preferably, the pixels are mapped into a UV color space or a color space isometric to the UV space (e.g. the HSV space) and the chrominance values of the pixels are determined as the color values, i.e. the U values as well as the V values. Then, an average of the U values of the selected pixels and/or an average of the V values of the selected pixels can be determined as the color temperature. This can be configured such that the color temperature of the image is represented by the average of the U values of the selected pixels on the one hand and by the average of the V values of the selected pixels on the other hand. By such a determination of the color temperature, it can be reliably recognized whether or not color casts are present in the image. If the average of the U values of the selected pixels is relatively high, this indicates a blue cast in the image, while a high average of the V values of the selected pixels indicates a red color cast. In addition, this embodiment has the advantage that those pixels which represent grey pixels can be located in the UV color space without much effort.
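As an illustrative sketch of this embodiment, the UV mapping and the averaging can look as follows; the BT.601-style conversion coefficients are an assumption, since the text only requires some mapping into a UV color space:

```python
def rgb_to_uv(r, g, b):
    # BT.601-style chrominance; grey pixels (r == g == b) map close to the
    # origin of the UV plane.
    u = -0.14713 * r - 0.28886 * g + 0.436 * b
    v = 0.615 * r - 0.51499 * g - 0.10001 * b
    return u, v

def color_temperature(selected_uv):
    # Colour temperature (UC, VC) as the mean U and mean V of the
    # selected pixel subset.
    uc = sum(u for u, _ in selected_uv) / len(selected_uv)
    vc = sum(v for _, v in selected_uv) / len(selected_uv)
    return uc, vc
```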

In an embodiment, it is provided that upon performing the white balance, a gain factor of at least one color channel of the camera is adjusted depending on the determined color temperature. Therein, the gain factor of the red color channel and/or the gain factor of the blue color channel of the camera are preferably adjusted. Therein, it is in particular provided that exclusively the gain factor of the red color channel and/or the gain factor of the blue color channel are adjusted depending on the color temperature, while the green color channel of the camera is not corrected. This embodiment exploits the fact that usually red objects cause blue color casts and blue objects cause red color casts, while the green color channel is not so much influenced by monochromatic objects. The red color channel of the camera is preferably corrected depending on the V values of the selected pixels, while the blue color channel of the camera is preferably adapted depending on the U values of the selected pixels.
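A minimal sketch of this gain adjustment, with an assumed linear control law and an assumed proportional constant k (the text fixes only which channel reacts to which average):

```python
def adjust_gains(gain_r, gain_b, uc, vc, k=0.01):
    # A high average V (red cast) lowers the red gain, a high average U
    # (blue cast) lowers the blue gain; the green channel stays untouched,
    # as described in the text above.
    return gain_r - k * vc, gain_b - k * uc
```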

Preferably, the camera is a video camera, by means of which a temporal sequence of individual images (so-called frames) is provided. Then, the white balance can be performed by iterative adaptation of the gain factor of at least one color channel across the frames depending on the color temperature of the respectively preceding image. Thus, the color temperature is regulated across the frames in a closed control loop considering the current color temperature. The adjustment of the gain factor of the at least one color channel of the camera can for example be configured such that the gain factor of the respective color channel, which is usually adjusted by the camera itself, is corrected with an adaptation factor, which is for example added to the gain factor. A control device (for example a central controller) separate from the camera can iteratively increase or decrease this adaptation factor depending on whether the current gain factor is too high or too low. For this purpose, the camera can include a register, into which a variation value (for example +1, -1 or 0) can be written for each image, by which the adaptation factor is to be varied. The current gain factor thus results as the sum of the previous gain factor on the one hand and the adaptation factor on the other hand, which in turn can be increased or decreased across the frames.

The iterative adaptation of the gain factor has the advantage that erratic variations of the coloring of the displayed image presentation are prevented and the coloring is very smoothly and uniformly changed.

The white balance can be controlled by means of a control device separate from the camera, by means of which the image presentation for display on the display device is generated. This control device can also receive images of several cameras and provide the image presentation from the images of several cameras. Control signals can be communicated from the control device to the camera, due to which the white balance is performed within the camera; in particular, the mentioned gain factor of at least one color channel is adjusted. This control can be configured such that the current variation value (+1, -1 or 0), by which the mentioned adaptation factor of the respective color channel gain factor is to be altered, is communicated from the control device to the camera. A separate control device usually has a considerably greater computing power than the camera itself, such that the color temperature of the image can be determined very quickly and with high accuracy.
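One control step of this closed loop can be sketched as follows; mapping the average U value to the blue-channel gain and the use of a target value with a deadband are assumptions of the sketch:

```python
def variation_value(uc, target=0.0, deadband=0.5):
    # Returns the value (+1, -1 or 0) the control device would write to the
    # camera register to nudge the blue-channel adaptation factor.
    if uc > target + deadband:
        return -1  # blue cast detected: decrease the blue gain
    if uc < target - deadband:
        return +1  # too little blue: increase the blue gain
    return 0       # within the deadband: leave the adaptation factor alone
```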

The camera itself can also have an additional white balance algorithm, which can for example be based on the so-called "grey world assumption". This hypothesis states that the averages of all of the color channels should result in a grey value on average. Thus, the red, green and blue color components can be influenced within the camera such that the color averages of all three color channels are identical. However, within the camera, it is not taken into account that a monochromatic object can be located in the imaged environment, which can influence the color averages and thus the white balance algorithm within the camera. For this reason, it proves advantageous if the gain factors of the color channels are additionally influenced or adapted by the external control device, which determines the color temperature of the image based on grey pixels and is able to influence the internal white balance of the camera.
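The grey world assumption itself reduces to a one-line computation; normalising the red and blue averages to the green average (so that green stays at gain 1.0) is one common implementation choice, not mandated by the text:

```python
def grey_world_gains(mean_r, mean_g, mean_b):
    # Gains that equalise the three channel averages, i.e. make the image
    # grey on average; the green channel is left untouched.
    return mean_g / mean_r, 1.0, mean_g / mean_b
```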

Preferably, an area is defined as the main color space area which exclusively includes pixels whose chrominance values satisfy the following condition: wherein U and V denote the chrominance values of a pixel and T1 denotes a first limit value. Thus, the main color space area includes pixels whose chrominance values are bounded above. Namely, grey pixels can usually be expected within this area.

Alternatively, the above condition can also be configured as follows: -T1 <U<T1 and -T1 <V<T1. This simplified condition implies that the main color space area only includes pixels, the chrominance values of which are within a square defined around the origin of the color space.
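The simplified square condition is directly testable in code; the magnitude of T1 is an illustrative assumption, as the text leaves it implementation-specific:

```python
T1 = 16.0  # illustrative first limit value

def in_main_area(u, v, t1=T1):
    # Square around the origin: -T1 < U < T1 and -T1 < V < T1.
    return -t1 < u < t1 and -t1 < v < t1
```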

Additionally or alternatively, the main color space area can be restricted such that it includes exclusively pixels, the chrominance values of which satisfy the following condition:

(U > -T2 and V < T2) or (U < T2 and V > -T2). This condition implies that the main color space area includes exclusively pixels which essentially lie in the second quadrant or in the fourth quadrant of the color space. Therein, T2 denotes a second limit value, which can be a relatively low positive value or can also be set to zero in a special case. If T2 is equal to zero, an area is defined as the main color space area in which the chrominance values of a pixel have opposite signs. This means that the U value of a pixel has a sign opposite to its V value. Thus, the main color space area lies exclusively in the second and the fourth quadrant of the color space, but not in the first and the third. This embodiment is based on the realization that the pixels which lie in the first or the third quadrant of the color space are associated with colored objects (for example monochromatic objects) with very high likelihood and are therefore of no interest for the determination of the color temperature. In other words, with high likelihood the first and the third quadrant do not contain pixels which would correspond to grey pixels shifted by a color cast and which are searched for determining the color temperature.
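This condition, too, can be written down directly; with the second limit value set to zero it keeps exactly the opposite-sign pixels of the second and fourth quadrants:

```python
def opposite_sign_condition(u, v, t2=0.0):
    # (U > -T2 and V < T2) or (U < T2 and V > -T2); with t2 == 0 only pixels
    # whose U and V values have opposite signs remain (2nd and 4th quadrant).
    return (u > -t2 and v < t2) or (u < t2 and v > -t2)
```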

The at least one further color space area can exclusively be within a quadrant of the color space. This further color space area can for example be bounded by axes of the color space on one hand and additionally by a preset limit function on the other hand. Thus, grey pixels can be located, which possibly can have been shifted in the color space by color casts in the image.

The mentioned limit function can for example be configured such that the further color space area is bounded by the axes of the color space on the one hand and by a straight line on the other hand, which intersects the two axes. In this case, the color space area is a triangular area. Alternatively, the limit function can also be defined by two straight lines, one of which extends perpendicularly to the U axis and the other of which extends perpendicularly to the V axis of the color space. Here, the further color space area is a rectangular area.
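Both variants of the limit function, sketched for the second quadrant; the concrete intercept and rectangle bounds are assumed values, the text only requires a line intersecting both axes (triangular case) or two lines perpendicular to the axes (rectangular case):

```python
def in_triangular_q2_area(u, v, intercept=64.0):
    # Bounded by the U axis, the V axis and the straight line V = U + c.
    return u < 0 and v > 0 and v < u + intercept

def in_rectangular_q2_area(u, v, u_min=-48.0, v_max=48.0):
    # Bounded by the axes and two lines perpendicular to the U and V axes.
    return u_min < u < 0 and 0 < v < v_max
```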

In one embodiment, it is provided that in addition to the main color space area, at least two further color space areas different from each other are defined, one of which lies within the second quadrant of the color space and the other of which lies within the fourth quadrant of the color space. Then, for each of these color space areas, the maximum of the absolute values of all of the color values (in particular of all of the chrominance values) of the respective color space area can be determined. If the number of the pixels located within the main color space area is less than the preset minimum number, it can then be determined, depending on a comparison of the maximum values of the further color space areas, which one of the further color space areas is selected, the pixels of which are taken as a basis for determining the color temperature. Therein, in particular, that color space area is selected whose maximum color value is less than the maximum color value of the other color space area. This embodiment is based on the realization that colored, for example monochromatic, objects usually cause higher color values, while the grey pixels influenced by color casts usually have lower color values. For determining the color temperature, that quadrant which has relatively low color values is thus more reliable. In contrast, high color values rather indicate a monochromatic object, such that the pixels of this quadrant cannot be used for determining the color temperature.
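The comparison of the two maximum values reduces to a small helper; the tie-breaking towards the fourth quadrant when both maxima are equal is an assumption:

```python
def pick_further_area(q2_values, q4_values):
    # Choose between the 2nd- and 4th-quadrant areas: the one whose largest
    # absolute chrominance value is smaller is the more reliable source of
    # shifted grey pixels; large magnitudes suggest a monochromatic object.
    max_q2 = max(abs(x) for x in q2_values)
    max_q4 = max(abs(x) for x in q4_values)
    return "Q2" if max_q2 < max_q4 else "Q4"
```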

For example, four further color space areas different from each other can be defined overall, which are taken as a basis for the algorithm, in addition to the main color space area:

- a first further color space area within the first quadrant of the color space, which includes exclusively positive U values and positive V values,

- a second further color space area within the second quadrant, which includes exclusively negative U values and positive V values,

- a third further color space area within the third quadrant, which includes exclusively negative U values and negative V values, and

- a fourth further color space area within the fourth quadrant, which includes exclusively positive U values and negative V values.

If the number of pixels located within the main color space area is less than the preset minimum number, one of these four further color space areas can be selected and the color temperature of the image can be determined based on the pixels of this selected color space area. Depending on the currently prevailing conditions, the respectively optimum color space area can thus be selected, the pixels of which are used for determining the color temperature.

If the preset minimum number of pixels is within the main color space area, but a period of time for which the color temperature is outside of a preset range of set values exceeds a preset threshold, and/or the color temperature diverges from the preset range of set values, it can be provided that the main color space area is no longer selected for determining the color temperature; instead, the second or the fourth further color space area is selected. Namely, it can occur that, although the required minimum number of pixels is available within the main color space area, the color temperature nevertheless cannot be regulated to the range of set values, because for example the grey pixels have been shifted from the origin of the color space into one of the quadrants due to a relatively intense color cast. This can be recognized accordingly, and the determination of the color temperature can then be performed based on the second or the fourth color space area.

In order to identify whether or not the result of the white balance based on the main color space area is satisfactory, different algorithms can be applied: on the one hand, for example, a counter can be defined, by which the number of iterations (number of frames) is counted in which the color temperature is outside of a preset range of set values and thus cannot converge. If the counter exceeds a preset counter limit value, the second or the fourth further color space area can be selected instead of the main color space area and the pixels of this selected color space area can be taken as a basis for the determination of the color temperature. On the other hand, it can also be provided that a dummy parameter is defined, which is set equal to the mentioned adaptation factor but, unlike the adaptation factor, is not limited and thus can be artificially iteratively increased or reduced even beyond a limit value of the adaptation factor. If this dummy parameter reaches a preset parameter limit value, the second or the fourth further color space area can be selected instead of the main color space area and the pixels of this selected color space area can be taken as a basis for determining the color temperature.

If the number of pixels located within the main color space area is less than the preset minimum number, it can be determined which one of the four further color space areas includes the greatest number of pixels. If the second or the fourth color space area includes the greatest number of pixels, one of these color space areas can be used for determining the color temperature. However, if the first or the third color space area includes the greatest number of pixels, it can be assumed that the number of grey pixels is too low to reliably determine the color temperature. In this case, the selection of one of the color space areas for the determination of the color temperature is preferably omitted and the white balance is performed independently of the color temperature. In this case, an offset value can for example be applied to the gain factor of at least one color channel of the camera in order to counteract the internal white balance of the camera.

In one embodiment, it is provided that the image presentation for display on the display device is generated from a partial region of the image, and exclusively pixels of the displayed partial region are taken as a basis for the determination of the color temperature. For the determination of the color temperature, thus, exclusively those pixels are considered, which are displayed on the display device and thus contribute to the generation of the image presentation. Thus, color casts in the image presentation can be prevented, which otherwise could be caused by high color values of pixels, which are outside of the used partial region of the image.

It can also be provided that at least two cameras of the motor vehicle each provide an image of an environmental region of the motor vehicle and the image presentation for display on the display device is generated from respective partial regions of the images. Then, the white balance of the respective partial regions can be individually performed for each camera.

It can also be provided that the partial regions of the images overlap each other in an overlap area. In this case, for performing the white balance of the respective partial region, the luminance values of the partial region in the overlap area can be acquired. The size of the main color space area can then be determined depending on the luminance values in the overlap area, in particular depending on an average of the luminance values in the overlap area. In this manner, the size of the main color space area is adjusted depending on the brightness of the overlap area and can therefore respectively be optimally determined for different brightness levels.

Alternatively, it can also be provided that the main color space area has a fixed, preset size, which is not altered in the operation.

A camera system according to the invention for a motor vehicle includes at least one camera for providing images of an environmental region of the motor vehicle as well as an electronic control device formed for performing a method according to the invention. The control device can be a component separate from the camera or it can alternatively be integrated in the camera and thus be an internal control unit of the camera.

A motor vehicle according to the invention, in particular a passenger car, includes a camera system according to the invention.

The preferred embodiments presented with respect to the method according to the invention and the advantages thereof correspondingly apply to the camera system according to the invention as well as to the motor vehicle according to the invention. Further features of the invention are apparent from the claims, the figures and the description of figures. All of the features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations or else alone.

Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.

The figures show:

Fig. 1 in schematic illustration a motor vehicle with a camera system according to an embodiment of the invention;

Fig. 2 in schematic illustration a block diagram of the camera system, wherein a method according to an embodiment of the invention is explained in more detail;

Fig. 3 to 7 each a color space, wherein different color space areas are explained in more detail; and

Fig. 8 to 10 flow diagrams for explaining a method according to an embodiment of the invention.

A motor vehicle 1 illustrated in Fig. 1 is for example a passenger car. The motor vehicle 1 includes a camera system 2 having a plurality of cameras 3, 4, 5, 6 in the embodiment, which are disposed distributed on the motor vehicle 1. In the embodiment, four cameras 3, 4, 5, 6 are provided, wherein the invention is not restricted to such a number and arrangement of the cameras 3, 4, 5, 6. Basically, any number of cameras can be used, which can be mounted at different locations of the motor vehicle 1. Alternatively to such a multi-camera system 2, a single camera can also be used.

A first camera 3 is for example disposed on the front bumper of the motor vehicle 1. A second camera 4 is for example disposed in the rear region, for instance on the rear bumper or on a tailgate. The two lateral cameras 5, 6 can for example be integrated in the respective exterior mirrors. The cameras 3, 4, 5, 6 are electrically coupled to a control device 7, which in turn is coupled to a display device 8. The display device 8 can be an LCD display.

The cameras 3, 4, 5, 6 are video cameras, which each can capture a sequence of images per time unit and communicate it to the control device 7. The cameras 3, 4, 5, 6 have a large opening angle, for instance in a range of values from 150° to 200°. They can also be so-called fish-eye cameras.

The camera 3 captures an environmental region 9 in front of the motor vehicle 1. The camera 4 captures an environmental region 10 behind the motor vehicle 1. The camera 5 captures a lateral environmental region 11 to the left of the motor vehicle 1, while the camera 6 captures an environmental region 12 on the right side of the motor vehicle 1. The cameras 3, 4, 5, 6 provide images of the respective environmental regions 9, 10, 11, 12 and communicate these images to the control device 7. As is apparent from Fig. 1, the imaged environmental regions 9, 10, 11, 12 mutually overlap in pairs.

From the images of the cameras 3, 4, 5, 6, the control device 7 generates an image presentation, which is then displayed on the display device 8.

In Fig. 2, a block diagram of the camera system 2 is illustrated in highly abstract illustration. The cameras 3, 4, 5, 6 communicate images I3, I4, I5, I6 to the control device 7. The control device 7 then generates the image presentation 13 from respective partial regions I3', I4', I5', I6' of the images I3, I4, I5, I6, which is presented on the display device 8. This image presentation 13 can for example be a top view presentation, which shows the motor vehicle 1 and its environment 9, 10, 11, 12 from a bird's eye view. This image presentation 13 is generated from the respective partial regions I3', I4', I5', I6', which show the respective environmental region 9, 10, 11, 12 up to a predetermined distance from the motor vehicle 1 and which are processed together to form the image presentation 13. Therein, the image of the motor vehicle 1 itself can be stored in a memory of the control device 7.

Alternatively, the cameras 3, 4, 5, 6 can communicate exclusively the partial regions I3', I4', I5', I6' to the control device 7 such that the "cutting out" of the partial regions is performed internally in the cameras 3, 4, 5, 6.

The control device 7 also controls the white balance of the images I3, I4, I5, I6. Therein, the white balance is performed individually for each camera 3, 4, 5, 6 such that the images I3, I4, I5, I6 are subjected to the white balance independently of each other. Therein, the control of the white balance is performed by means of the control device 7, which communicates with the individual cameras 3, 4, 5, 6 via control lines illustrated in Fig. 2. This data communication can also be bidirectionally effected.

The white balance within the respective camera 3, 4, 5, 6 is performed by adjusting gain factors of the color channels. For each color channel, i.e. for the red color channel, the green color channel as well as the blue color channel, a respective gain factor FR, FG, FB is defined, which can be arbitrarily varied. As is apparent from Fig. 2, the adjustment of the color channels is effected within the camera 3 by means of gain factors FR3, FG3, FB3. The corresponding gain factors of the other cameras 4, 5, 6 are labeled in Fig. 2 in an analogous manner.

Although below exclusively the white balance of the camera 3 is explained in more detail, at this point it is to be noted that the white balance of the other cameras 4, 5, 6 is performed in analogous manner.

Within the camera 3, an internal white balance is performed on the one hand by the camera 3 itself varying the corresponding gain factors FR3, FG3, FB3 depending on color values of the images I3. However, this local white balance has proved insufficient such that an additional white balance by means of the control device 7 is proposed. Namely, the control device 7 is configured to influence the color channels of the camera 3. Therein, in the embodiment, it is provided that the control device 7 can influence exclusively the gain factor FR3 of the red color channel as well as the gain factor FB3 of the blue color channel, but not the gain factor FG3 of the green color channel.

The influence on the gain factors FR3, FB3 by the control device 7 is effected such that an adaptation factor is respectively added to these gain factors: FR3 + FRA3 as well as FB3 + FBA3. Therein, FRA3, FBA3 denote the respective adaptation factors. They can be varied by means of the control device 7. To this end, variation values are input into a register of the camera 3 (via the control lines), by which the respective adaptation factor FRA3, FBA3 is to be varied. The variation of the adaptation factors FRA3, FBA3 can for example be effected stepwise by ±1 or ±2. Therein, the variation of the adaptation factors FRA3, FBA3 is effected with the same frequency with which the images I3 are captured. This means that the register can be fed with new variation values for each frame I3.

The control device 7 can read out the current values of the gain factors FR3, FB3 as well as the current adaptation factors FRA3, FBA3 via the control lines. The adjustment of the gain factors FR3, FB3 is effected depending on a color temperature of the current image I3 or, in the embodiment, of the partial region I3'. Therein, the color temperature is determined based on chrominance values U and V of the image I3. In particular, it is provided that exclusively pixels which are within the partial region I3' are taken into account. However, not all of the pixels of this partial region I3' are used for determining the color temperature, but only pixels which are selected according to a certain selection criterion and thus are associated with a certain color space area.

In the described white balance algorithm, the YUV color space is used. However, the invention is not restricted to the processing of the pixels in this color space and can also be implemented in other color spaces. In particular, the invention can be used in any color space isometric to the YUV space, for example the HSV space.

The basic idea in performing the white balance presently is that the color temperature of the image, and thus any color cast present, can be determined based on grey pixels, which should be in the range of the origin of the UV color space in the ideal case (i.e. without color casts). As an example of grey pixels, an image of the grey road surface is mentioned at this point, which usually has a grey coloring. In the normal case, thus, grey pixels should always be present in the range of the origin of the UV color space. However, due to a color cast, it can occur that the coloring of the grey pixels is artificially varied by the camera 3 and these pixels are shifted away from the origin in the UV color space. This can for example be the case if a larger area with a monochromatic coloring is imaged in the image I3, such as a larger lawn area or a larger monochromatic object. In this case, this monochromatic surface causes a color cast and thus a shift of the grey pixels in the UV color space. In the present method, it is therefore proposed to locate the grey pixels in the UV color space and then to determine the color temperature of the image I3 based on these grey pixels.

Because, as above described, the grey pixels should basically be in the range of the origin of the UV color space, first, a main color space area is defined around the origin of the UV color space, in which grey pixels are to be expected normally in an image without color casts.

With reference now to Fig. 3, a main color space area 20 is defined around the origin 21 of the UV color space. This main color space area 20 is defined such that all of the pixels are associated with this area which satisfy the following conditions:

|U| + |V| < T1 as well as (U > -T2 and V < T2) or (U < T2 and V > -T2),

wherein U, V denote the chrominance values of the pixels, T1 denotes a first limit value and T2 denotes a second limit value, wherein it preferably applies: T2 < T1. Therein, the second limit value T2 is preferably adjusted to a relatively low value, wherein it applies in the special case: T2 = 0. This then means that only those pixels are associated with the main color space area 20, the chrominance values of which have opposite signs. This is based on the realization that grey pixels usually should be in the second and the fourth quadrant Q2, Q4, while the pixels located in the first and the third quadrant Q1, Q3 usually are associated with real, colored objects.
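A minimal sketch of this membership test, assuming the conditions as given above with strict inequalities (function name illustrative):

```python
# Membership test for the main color space area 20 of Fig. 3:
# |U| + |V| < T1 and ((U > -T2 and V < T2) or (U < T2 and V > -T2)).
# With T2 = 0 this keeps only pixels whose chrominance values have
# opposite signs (second and fourth quadrant).

def in_main_area(u, v, t1, t2=0):
    inside_diamond = abs(u) + abs(v) < t1
    opposite_signs = (u > -t2 and v < t2) or (u < t2 and v > -t2)
    return inside_diamond and opposite_signs
```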

Alternatively, the main color space area 20 can also be defined corresponding to Fig. 4. Therein, the above mentioned first condition is replaced with the following condition: -T1 < U < T1 as well as -T1 < V < T1. Therein, the above mentioned second condition remains the same, as it is indicated in Fig. 4 with the second limit value T2.

Therein, the first limit value T1 can for example be 10. However, it is also possible to determine this first limit value T1 depending on the luminance values of the pixels of the image I3 in the overlap areas 9/11 as well as 9/12, as they are illustrated in Fig. 1. If for example Y1 is the average of the luminance values of the image I3 in the overlap area 9/11 and Y2 is the average of the luminance values of the image I3 in the overlap area 9/12, the first limit value T1 can be adjusted according to the following equation: T1 = k · (Y1 + Y2), wherein k denotes a parameter to be determined, which can be in a range of values from 0.1 to 0.3.
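As a sketch, this adaptive limit value can be computed directly from the two overlap-area luminance averages (function name and default k are illustrative assumptions within the stated range):

```python
# T1 = k * (Y1 + Y2), where Y1, Y2 are the average luminance values of the
# image I3 in the two overlap areas and k is the tuning parameter (0.1..0.3).

def limit_t1(y_overlap_left, y_overlap_right, k=0.25):
    y1 = sum(y_overlap_left) / len(y_overlap_left)
    y2 = sum(y_overlap_right) / len(y_overlap_right)
    return k * (y1 + y2)
```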

In addition to the main color space area 20, four further color space areas 22, 23, 24, 25 are also defined, as they are for example illustrated in Fig. 5. Therein, the first further color space area 22 is exclusively located in the first quadrant Q1 , while the second color space area 23 is located in the second quadrant Q2, the third color space area 24 is located in the third quadrant Q3 and the fourth further color space area 25 is located in the fourth quadrant Q4. The further color space areas 22, 23, 24, 25 are therefore bounded by the two axes of the UV color space on the one hand and each by a limit function 26 on the other hand. This limit function 26 can now be defined in different manner.

An example of the limit function 26 is illustrated in Fig. 5. Therein, the first further color space area 22 is defined such that this color space area 22 includes exclusively pixels satisfying the following conditions:

U > 0 and V > 0 as well as |U| + |V| < T3, wherein T3 denotes a third limit value greater than T2.

Correspondingly, exclusively those pixels are associated with the second further color space area 23, which satisfy the following conditions:

U < 0 and V > 0 as well as |U| + |V| < T3.

Exclusively pixels are associated with the third further color space area 24, which satisfy the following conditions:

U < 0 and V < 0 as well as |U| + |V| < T3.

Finally, exclusively pixels are associated with the fourth further color space area 25, which satisfy the following conditions:

U > 0 and V < 0 as well as |U| + |V| < T3.
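The four conditions above amount to a quadrant test combined with the diamond-shaped limit |U| + |V| < T3. A sketch (function name, return values and the handling of pixels lying exactly on an axis are illustrative assumptions):

```python
# Assign a pixel to one of the four further color space areas of Fig. 5,
# or to None if it lies outside the diamond |U| + |V| < T3 or on an axis.

def further_area(u, v, t3):
    if abs(u) + abs(v) >= t3 or u == 0 or v == 0:
        return None
    if u > 0 and v > 0:
        return 1   # first further area, quadrant Q1
    if u < 0 and v > 0:
        return 2   # second further area, quadrant Q2
    if u < 0 and v < 0:
        return 3   # third further area, quadrant Q3
    return 4       # fourth further area, quadrant Q4
```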

A further example of the four further color space areas 22 to 25 is illustrated in Fig. 6. Here, the further color space areas 22 to 25 are square areas, which are located in the respective quadrant Q1, Q2, Q3, Q4.

A still further example is illustrated in Fig. 7. Here, the further color space areas 22 to 25 are bounded by a common limit function 26, which has the shape of a circle. The center of this circle 26 is located in the origin 21 of the UV color space, while the radius is equal to T3.

Now, it is decided which one of the color space areas 20, 22, 23, 24, 25 is selected for determining the color temperature. The pixels located in this selected color space area are then taken as a basis for determining the color temperature for performing the white balance. Therein, it is in particular provided that the color temperature is determined either based on the main color space area 20 or based on the color space areas 23 or 25, while the color space areas 22, 24 are only used for examining certain criteria, as is described in more detail below.

If one of the color space areas 20, 22 to 25 is selected, its pixels are used for determining the color temperature. The determination of the color temperature is effected such that an average, denoted by UC below, is calculated from all of the U values of the selected color space area. Furthermore, an average, denoted by VC below, is calculated from all of the V values of the selected color space area. The color temperature is therefore represented by the pair of values UC, VC.
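A direct sketch of this averaging (function name is an illustrative assumption):

```python
# Color temperature (UC, VC) as the averages of the U and V values of all
# pixels in the selected color space area.

def color_temperature(pixels):
    """pixels: non-empty list of (U, V) tuples from the selected area."""
    uc = sum(u for u, _ in pixels) / len(pixels)
    vc = sum(v for _, v in pixels) / len(pixels)
    return uc, vc
```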

If the main color space area 20 is selected, thus, the white balance is performed as follows:

If UC is greater than a preset limit value TBL, the adaptation factor FBA3 of the blue color channel of the camera 3 is for example decreased by 2. However, if the value UC is greater than a limit value TBS, which in turn is less than the above limit value TBL, such that TBS < UC < TBL, then the adaptation factor FBA3 of the blue color channel is only decreased by 1. Otherwise, the adaptation factor is kept constant.

On the other hand, if the value UC is less than -TBL, then the adaptation factor FBA3 is increased by 2. Correspondingly, the adaptation factor FBA3 is only increased by 1 if it applies: -TBL < UC < -TBS.

The analogue can also apply to the adaptation factor FRA3 of the red color channel: if the value VC is greater than a limit value TRL, the adaptation factor FRA3 can be decreased by 2. If the value VC is less than TRL, but greater than a lower limit value TRS, then the adaptation factor FRA3 is reduced by 1. Else, the adaptation factor is kept constant.

In analogous manner, the adaptation factor FRA3 is increased by 2, if VC < -TRL. Finally, the adaptation factor FRA3 is increased by 1 , if -TRL < VC < -TRS.

The mentioned limit values can be adjusted as follows: TBL = TRL = 4 and TBS = TRS = 1.5. The above mentioned adaptation is based on the fact that the U values of the pixels are proportional to the blue color channel (in the RGB color space), while the V values are proportional to the red color channel.
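The stepwise rule for the blue channel can be sketched as follows; the red channel uses VC with TRL and TRS analogously. The behavior exactly at TBL or TBS is an assumption here, since the text uses strict inequalities.

```python
# Change applied to the adaptation factor FBA3 per frame, based on UC and
# the limit values TBL = 4 and TBS = 1.5 from the text.

def blue_adaptation_step(uc, tbl=4.0, tbs=1.5):
    if uc > tbl:
        return -2   # strong positive cast: decrease FBA3 by 2
    if tbs < uc:
        return -1   # mild positive cast: decrease FBA3 by 1
    if uc < -tbl:
        return 2    # strong negative cast: increase FBA3 by 2
    if uc < -tbs:
        return 1    # mild negative cast: increase FBA3 by 1
    return 0        # within the dead band: keep FBA3 constant
```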

For the further color space areas 22 to 25 - if one of these color space areas is selected - the limit values TBL, TRL, TBS, TRS can be correspondingly adapted.

For the adaptation factors FRA3, FBA3, limit values can also be defined, which must not be exceeded or fallen below. The maximally allowable adaptation values can for example be 16 and -16 for positive and negative values, respectively. The reason for this is that large variations of the gain factors FR3, FB3 are to be avoided. This is considered a trade-off between the stability of the entire algorithm on the one hand and the power of the algorithm on the other hand.

In order to prevent the color cast of the image I3, the adaptation factors FRA3, FBA3 are iteratively varied across the frames until the determined color temperature UC, VC is in a preset range of set values around the zero value. This range of set values can for example be between -1.5 and 1.5.

However, cases can also occur in which the adaptation factors FRA3, FBA3 have already reached the maximally allowable values 16 or -16 and the color temperature UC, VC is still outside of the range of set values, which means that a distinct color cast still exists. In order to determine this, in addition, a dummy parameter (quasi as a counter) is defined, which is equal to the adaptation factor, but may also exceed and fall below the above mentioned maximally allowable values 16 and -16, respectively. For the adaptation factor FRA3, a dummy parameter FRAS3 is defined; for the adaptation factor FBA3, a dummy parameter FBAS3 is defined. The meaning of these dummy parameters is described in more detail below.

It can also occur that not enough grey pixels are present in the main color space area 20, because they have been shifted into another color space area or are simply not present at all, because for example exclusively a very large lawn field is imaged in the image I3.

In order to provide a remedy in the two mentioned cases, the mentioned further color space areas 22 to 25 are defined. A method according to an embodiment of the invention is now explained in more detail with reference to the flow diagrams in Fig. 8 to 10.

Particular criteria are introduced, based on which one of the color space areas 20, 22, 23, 24, 25 is selected for determining the color temperature. Since, as described above, it is assumed that the pixels in the quadrants Q1 and Q3 are most likely associated with colored objects and do not represent shifted grey pixels, it is proposed to exclude these color space areas 22, 24 from the determination of the color temperature and, if the majority of the pixels is associated with one of these two color space areas 22 or 24, to apply a preset offset value to the corresponding gain factor FR3 or FB3.

According to Fig. 8, the method begins in a first step S1, in which, for each color space area 20, 22, 23, 24, 25, an average is calculated from all of the U values as well as from all of the V values of the respective color space area. Thus, for each color space area 20, 22, 23, 24, 25, the values UC and VC are calculated.

In a further step S2, it is then checked whether or not a minimum number TN0 of pixels required for determining the color temperature exists within the main color space area 20. In other words, it is checked whether or not the actual number N0 of pixels within the main color space area 20 is greater than or equal to the minimum number TN0. If this is the case, the method proceeds to a step S3; otherwise, the method proceeds to a step S4.

Within the scope of step S3, further steps are performed, which are described based on the flow diagram according to Fig. 9. If the criterion according to step S2 is satisfied, it is first checked according to step S301 whether the magnitude of the dummy parameter FRAS3 is less than a preset limit value (for example 24) and the magnitude of the dummy parameter FBAS3 is less than a limit value (for example 24). If both of these criteria are satisfied, this means that a sufficient number of pixels is present in the main color space area 20 and the color temperature values UC and VC are expected to have been shifted back to their normal (regular) limits within the main color space area 20. In this case, the color temperature estimation is performed using only the pixels from the main color space area 20 according to step S303, and the parameters FBAS3 and FRAS3 are reset to zero. Otherwise, if one of the dummy parameters FRAS3 or FBAS3 exceeds the limit value in step S301, it is considered that UC and VC cannot be corrected based on the pixels from the main color space area 20. This means in other words that a preset period of time has elapsed, for which the color temperature UC, VC remains outside of the range of set values, and the method proceeds to step S302 and after that either to step S305 or to step S306, depending on step S304.

If the main color space area 20 is not used for the white balance, the dummy parameters FRAS3 and FBAS3 are set to the respective adaptation factors FRA3 and FBA3 in step S302. This is only performed if a criterion based on the following quantities and a preset limit value TL is satisfied, wherein VC2 denotes the average of the V values within the second further color space area 23, VC4 denotes the average of the V values within the fourth further color space area 25, UC2 denotes the average of the U values within the second further color space area 23 and UC4 denotes the average of the U values within the fourth further color space area 25.

Then, the method proceeds to a further step S304, in which it is decided whether the pixels of the second further color space area 23 or the pixels of the fourth further color space area 25 are taken as a basis for the determination of the color temperature and thus the white balance. Therein, the maximum value is determined from the absolute values of all of the U values (U2) and all of the V values (V2) within the second further color space area 23: Max(|U2|, |V2|). Correspondingly, the maximum value is also determined from the absolute values of all of the U values (U4) and of all of the V values (V4) within the fourth further color space area 25: Max(|U4|, |V4|). The maximum values are compared to each other and that color space area 23 or 25 is taken as a basis for the determination of the color temperature which has the smaller maximum value. If the criterion according to step S304 in Fig. 9 is satisfied, the method proceeds to a step S305, in which the pixels of the second further color space area 23 are taken as a basis for the determination of the color temperature. Otherwise, the method proceeds to a step S306, in which the pixels of the fourth further color space area 25 are taken as a basis for the determination of the color temperature for the white balance. This is based on the fact that grey pixels usually have lower color values than the pixels of colored objects. In addition, the corresponding corrections according to steps S305 and S306 are only performed in case the number of "grey pixels" within the areas 23 and 25, respectively, is greater than a predefined threshold TN2 for the area 23 and TN4 for the area 25, wherein it preferably applies: TN2 = TN4. This threshold TN2, TN4 is preferably slightly smaller than the minimum number TN0. If the latter condition is not satisfied, no corrections are performed in order to ensure high reliability of this correction step.
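Steps S304 to S306 can be condensed into a single decision function. In the following sketch, the function name, the return values and the handling of empty areas are assumptions; the comparison rule and the minimum-count requirement follow the text.

```python
# Decide between the second (23) and fourth (25) further color space area:
# take the area with the smaller maximum chrominance magnitude, but only if
# it contains more pixels than its threshold (TN2 or TN4); otherwise None,
# meaning no correction is performed.

def select_correction_area(area2, area4, tn2, tn4):
    """area2 / area4: lists of (U, V) tuples; returns 23, 25 or None."""
    max2 = max((max(abs(u), abs(v)) for u, v in area2), default=0)
    max4 = max((max(abs(u), abs(v)) for u, v in area4), default=0)
    if max2 < max4:
        return 23 if len(area2) > tn2 else None
    return 25 if len(area4) > tn4 else None
```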

If in step S2 according to Fig. 8 it is detected that the actual number N0 of pixels within the main color space area 20 is less than the minimum number TN0, the method proceeds to the step S4, within which several steps are performed, which are explained based on the flow diagram according to Fig. 10. According to step S401, it is first checked whether the first or the third color space area 22 or 24 includes the greatest number of pixels relative to the other color space areas 23 and 25. If the first or the third color space area 22 or 24 includes the greatest number of pixels, the method proceeds to a step S402, in which the white balance is performed independently of the color temperature and thus independently of the color space areas. Here, an offset value is applied to at least one of the gain factors FR3 and/or FB3, namely such that the respective gain factor FR3 and/or FB3 is controlled in the opposite direction to that in which it would be adjusted by the camera 3 within the scope of the internal white balance. In other words, the internal white balance of the camera 3 is counteracted.

However, if the second or the fourth color space area 23 or 25 includes the greatest number of pixels, the method proceeds to a step S403, in which it is checked whether the number N2 of pixels within the second color space area 23 is greater than or equal to a preset minimum number TN2, and furthermore whether the number N4 of pixels within the fourth color space area 25 is greater than or equal to a preset minimum number TN4, wherein it can also apply: TN2 = TN4. In addition, according to step S403, it is checked whether the absolute values of the dummy parameters FRAS3 and FBAS3 are not too large, i.e. do not exceed a limit value of for example 24. If according to step S403 it is determined that N2 < TN2 and N4 < TN4, or that the absolute values of the dummy parameters FRAS3 or FBAS3 are too high, the method proceeds to a step S404. Here, the two gain factors FR3 and FB3 are iteratively and thus stepwise reset to the zero value across the frames. This is effected iteratively such that the gain factors FR3, FB3 are stepwise incremented or decremented.

However, if at least one of the values N2, N4 is equal to or greater than the associated minimum number TN2, TN4, the method proceeds to a step S405, in which, similarly to step S304, it is decided which one of the further color space areas 23 or 25 is taken as a basis for the determination of the color temperature for the white balance. If the maximum color value is within the fourth color space area 25, the second color space area 23 is selected (step S406). Otherwise, the fourth color space area 25 is selected (step S407).
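The overall fallback logic of Fig. 10 (step S4) can be summarized in a hedged sketch. The returned action labels, the data shapes and the function name are illustrative; the branch conditions and the example limit value 24 for the dummy parameters follow the description above.

```python
# Condensed decision flow of step S4: counts maps the area number (1..4) to
# its pixel count, dummies holds the dummy parameters FRAS3 and FBAS3.

def step_s4(counts, dummies, tn2, tn4, limit=24):
    dominant = max(counts, key=counts.get)
    if dominant in (1, 3):
        return 'offset'        # S402: counteract the internal white balance
    too_few = counts[2] < tn2 and counts[4] < tn4
    too_large = any(abs(d) > limit for d in dummies)
    if too_few or too_large:
        return 'reset_gains'   # S404: step the gain factors back to zero
    return 'select_area'       # S405: choose area 23 or 25 (S406/S407)
```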