


Title:
SYSTEM AND METHOD FOR PROVIDING ILLUMINATION TO AN INTERIOR OF A VEHICLE
Document Type and Number:
WIPO Patent Application WO/2015/026296
Kind Code:
A1
Abstract:
An apparatus for providing illumination to an interior of a vehicle comprises input means configured to provide input data varying according to an environment surrounding the vehicle; processing means configured to process the input data to determine characteristics of the illumination to be provided; and output means configured to provide the illumination with the determined characteristics to the interior of the vehicle; wherein the characteristics of the illumination provided by the output means match the characteristics of the environment surrounding the vehicle when the illumination is provided; and wherein the characteristics comprise colours and illuminance.

Inventors:
MUNDHENK PHILIPP (SG)
STEINHORST SEBASTIAN (SG)
LUKASIEWYCZ MARTIN (SG)
WANG KAI XIANG (SG)
Application Number:
PCT/SG2014/000391
Publication Date:
February 26, 2015
Filing Date:
August 20, 2014
Assignee:
TUM CREATE LTD (SG)
International Classes:
B60Q3/02; B60Q1/02; B60Q3/00; G06F17/00; G06F19/00; G06V10/141; G06V10/56
Foreign References:
US20110084852A12011-04-14
US6935763B22005-08-30
US5143437A1992-09-01
US7221264B22007-05-22
US20110227716A12011-09-22
US20070183163A12007-08-09
Attorney, Agent or Firm:
WATKIN, Timothy Lawrence Harvey (Tanjong Pagar PO Box 636, Singapore 6, SG)
Claims:
CLAIMS

1. An apparatus for providing illumination to an interior of a vehicle, the apparatus comprising:

input means configured to provide input data varying according to an environment surrounding the vehicle;

processing means configured to process the input data to determine characteristics of the illumination to be provided; and

output means configured to provide the illumination with the determined characteristics to the interior of the vehicle;

wherein the characteristics of the illumination provided by the output means match the characteristics of the environment surrounding the vehicle when the illumination is provided; and

wherein the characteristics comprise both colours and illuminance.

2. An apparatus according to claim 1, wherein the input data comprises an image of the environment surrounding the vehicle and the characteristics of the illumination are determined based on colours and illuminance of the image.

3. An apparatus according to claim 1, wherein the input means is configured to acquire a video having a plurality of frames and wherein the input data comprises an image as a frame of the video.

4. An apparatus according to claim 2 or 3, wherein the image comprises a plurality of pixels, each pixel having a pixel value representing its colour and illuminance, and wherein the characteristics of the illumination are determined by calculating a mean, median or mode of the pixel values.

5. An apparatus according to any one of claims 2 - 4, wherein an area of interest is selected from the image and the characteristics of the illumination are determined from only the area of interest.

6. An apparatus according to claim 4 or 5,

wherein each pixel value comprises R, G, B sub-values indicating brightness of red, green and blue in the pixel; and wherein prior to determining the colours of the illumination, the pixel value is converted to H, S, B sub-values indicating hue, saturation and brightness of the pixel.

7. An apparatus according to any one of claims 4 - 6, wherein a weight is given to each pixel and the calculation is performed taking into account the weights.

8. An apparatus according to claim 7, wherein the pixels represent respective points in a view captured through a panel adjacent the output means; and the weight of each pixel is determined based on the proximity of the point represented by the pixel to the output means.

9. An apparatus according to any one of claims 2 - 8, wherein the image is divided into multiple segments and each segment is processed independently to determine the characteristics of the illumination.

10. An apparatus according to claim 9, wherein the output means comprises a plurality of illumination sources and the image is divided according to the number of illumination sources whose illumination characteristics are to be determined from the image.

11. An apparatus according to any one of the preceding claims, wherein the determination of the characteristics of the illumination takes into consideration the delay between the providing of the input data and the providing of the illumination to the interior of the vehicle.

12. An apparatus according to any one of the preceding claims, wherein the processing means are configured to:

determine initial characteristics of the illumination; and

adjust the initial characteristics to determine the characteristics of the illumination to be provided by the output means;

wherein the adjustment smoothens changes in the illumination provided by the output means.

13. An apparatus according to claim 12, wherein the adjustment is performed using previously determined characteristics of the illumination.

14. An apparatus according to claim 13, wherein the adjustment is performed by mixing the initial characteristics with the previously determined characteristics of the illumination.

15. An apparatus according to any one of the preceding claims, wherein the processing means is configured to transmit the determined characteristics to the output means in parts.

16. An apparatus according to any one of the preceding claims, wherein the input data comprises pre-loaded images of possible environments which can surround the vehicle.

17. An apparatus according to claim 16 wherein the processing means is configured to process each pre-loaded image to determine the characteristics of the illumination prior to the vehicle going through the environment shown in the pre-loaded image.

18. An apparatus according to claim 16 or 17, further comprising an ambient light sensor configured to detect an intensity of lighting of the environment surrounding the vehicle and wherein the processing means is configured to determine the characteristics of the illumination based on a pre-loaded image of the environment and the intensity of lighting of the environment.

19. A method for providing illumination to an interior of a vehicle, the method comprising:

providing input data varying according to an environment surrounding the vehicle;

processing the input data to determine characteristics of the illumination to be provided; and

providing the illumination with the determined characteristics to the interior of the vehicle;

wherein the characteristics of the illumination provided by the output means match the characteristics of the environment surrounding the vehicle when the illumination is provided; and

wherein the characteristics comprise both colours and illuminance.

Description:
SYSTEM AND METHOD FOR PROVIDING

ILLUMINATION TO AN INTERIOR OF A VEHICLE

FIELD OF THE INVENTION

The present invention relates to a system and method for providing illumination to an interior of a vehicle.

BACKGROUND OF THE INVENTION

Recent statistics show that vehicles play an increasingly important role in people's lives. The number of cars sold worldwide over the last decade indicates a continuous growth in car sales. As an example, 58.89 million cars were sold in 2011, which is a growth of 3.64% in comparison to the number of cars sold in 2010 [Sta]. Furthermore, the one billion-unit mark in worldwide car sales was reached in 2010 and the OECD transport forum expects a total number of 2.5 billion cars worldwide in 2050 [Ten13].

People are spending increasingly more time in their cars [Vie]. In other words, the car is becoming more and more of an interim space. Therefore, the design of the car interior is gaining importance. Automotive manufacturers can for example improve the appeal of the car's interior environment by selecting high-value materials or designing appealing arrangements. In addition, the inclusion of illumination sources is also an important part of the interior design of the cars. This is because it affects the perception of the car interior, thus creating a positively or negatively perceived environment for the car occupants [PZB04, KP99]. Thus, the development of an interior lighting concept, which aims at the human psychology, can create a car environment that improves the well-being of the car occupants [WBWH07].

However, until now, the subject of interior lighting has been mostly neglected by automotive manufacturers. Several car designers have mostly considered only one or two main sources of illumination [WR10, PW00]. Furthermore, these illumination sources serve mainly functional purposes and do not serve to improve the human physiology and psychology. It is only recently that automotive manufacturers have recognized the importance of providing appealing interior lighting [WBWH07, WR10]. The majority of cars today are merely equipped with one central main lamp [WR10, WBWH07] which serves mainly functional purposes, such as to increase the brightness of the passenger compartment. There are a small number of cars today which are equipped with more advanced interior lighting. However, most of such interior lighting mainly serves to create individuality so as to increase the distinctiveness of the cars [KP99, Kno08].

One of the recent trends in interior lighting of vehicles is the implementation of ambient lighting [Rei11]. Another trend is the implementation of dynamically adapting lighting [WR10, KP99]. Such lighting can help to disburden the driver from additional control tasks. In particular, such lighting automatically adjusts to general ambient lighting conditions (i.e. brightness of the environment around the vehicle). It does not distract the driver from the actual driving task but instead helps to create a surrounding that allows the vehicle's occupants to feel less tired and more at ease [KP99, WBWH07].

There are several drawbacks in current interior lighting in vehicles. First, current interior lighting in vehicles does not dynamically adapt to the driving situation. Second, current interior lighting serves mainly to enhance the orientation and perception of the passenger compartment during times of low light and not at other times.

SUMMARY OF THE INVENTION

The present invention aims to provide a new and useful system and method for providing illumination to the interior of a vehicle.

In general terms, the invention proposes that illumination with characteristics varying with the environment around the vehicle is provided to the interior of the vehicle.

Specifically, a first aspect of the invention is an apparatus for providing illumination to an interior of a vehicle, the apparatus comprising:

input means configured to provide input data varying according to an environment surrounding the vehicle;

processing means configured to process the input data to determine characteristics of the illumination to be provided; and output means configured to provide the illumination with the determined characteristics to the interior of the vehicle;

wherein the characteristics of the illumination provided by the output means match the characteristics of the environment surrounding the vehicle when the illumination is provided; and wherein the characteristics comprise both colours and illuminance.

A second aspect of the invention is a method for providing illumination to an interior of a vehicle, the method comprising:

providing input data varying according to an environment surrounding the vehicle;

processing the input data to determine characteristics of the illumination to be provided; and

providing the illumination with the determined characteristics to the interior of the vehicle;

wherein the characteristics of the illumination provided by the output means match the characteristics of the environment surrounding the vehicle when the illumination is provided; and

wherein the characteristics comprise both colours and illuminance.

The invention helps to influence the psychology of the passengers in the vehicle in a positive manner. For example, it can induce specific feelings and moods in the passengers. The invention also enhances the experience of the passengers by offering a more intensive and natural perception of the environment surrounding the vehicle as it allows reproduction of the surrounding lighting conditions inside the vehicle. This perception is otherwise diminished due to small or tinted windows. Therefore, the invention allows projection of information into the field of view of the passengers which is otherwise not visible to them. Configuring the lighting such that its colours correspond to the colours of the environment also helps in enhancing the feel of spaciousness in the confined compartment of a vehicle.

In particular, the invention achieves a visual perception enlargement effect as the lights provided to the interior of the vehicles adapt in real-time to the characteristics (colour and illuminance) of the environment around the vehicle. For example, the lights may be arranged around a particular window and when the colours of the lights correspond to the view from the window, one perceives the coloured light as an extension of the window and hence, perceives the window as being larger. Due to the visual perception enlargement effect, the apparatus is suitable for operating during daylight. The visual perception enlargement effect is caused by the inhomogeneous field of view of humans. This is due to the distribution of cone cells. In particular, the cone cells are concentrated in a certain area of the eye, the fovea. Fig. 1 shows the visual acuity of the human eye in relation to the fovea distance [KGS05]. As shown in Fig. 1, the visual acuity has its maximum, corresponding to a resolution of one arc minute, in the area where the cone cells are concentrated. Only 10° away, the visual acuity decreases to a tenth of this maximum [Gre08]. Due to this, the image is blurred on the rest of the retina. In order to create the perceived image, the brain processes the signals provided by the eye. While doing so, the brain follows distinct patterns, which are described by the theories of Gestalt Psychology. Thus, areas which are close or have the same colour are grouped together, leading to the enlargement of the perceptual view [Roc97]. It has been shown in many studies [SVK07, KP99] that visual perception enlargement provides several positive effects. One such effect is the reduction of visual discomfort [BA06].

It is not entirely unknown to use the visual perception enlargement effect to enhance a user's experience. For example, such an effect is included in the Ambilight TV (which was introduced by the Dutch company Philips in 2004). The development of this TV was based on the idea to develop products which do not only focus on functional benefits, but also create appealing and enjoyable experiences [DH07]. In particular, the Ambilight TV uses LEDs which are mounted on its back. These LEDs serve to reduce eyestrain and visual discomfort, and enlarge visual perception. One of the recent models, the 6000 series, has 10 LEDs mounted on both sides of the TV. The LEDs illuminate in the same colour as the current image on the television. To achieve this, integrated electronics are used to analyze the input video signal in real-time to determine the colour for each LED. The coloured light is reflected off the wall behind the TV and this creates a perceived enlargement of the TV image [vdH08]. For example, when the TV is the main source of light in a dark room, one often experiences visual discomfort from watching TV. To reduce this visual discomfort, the LEDs of the Ambilight TV function as another lighting, apart from the actual TV screen, so that the change in brightness becomes smaller. The pupil of the person watching the Ambilight TV, which continuously adapts to the light level, is thus not as strained as it would be without the LEDs [vdH08].

However, to date, there has been no vehicle installed with interior lighting that can vary with the vehicle's environment such that the visual perception enlargement effect is achieved. Unlike in the Ambilight TV, to achieve the visual perception enlargement effect in a vehicle, one has to consider the perspective of each passenger in the vehicle. This is not straightforward as the colours and illumination of the images seen by the passenger via different see-through panels are changing with the movement of the vehicle (i.e. these are "live dynamic" images).

The enlargement of the visual perception created by this invention can help reduce the power consumed by the air conditioner in the vehicle. This is especially useful in tropical megacities, where the air conditioner in the vehicle is usually turned on throughout the car ride so as to cool the interior of the vehicle. This advantage is explained as follows.

A reduction of the actual window size can help lower the heat flow. However, this is generally not feasible since a decreased window size will reduce the view of the passenger. As a result, the passenger's comfort is negatively affected.

However, with the visual perception enlargement effect provided by the present invention, the actual window size can be reduced and yet, the passenger will not be substantially affected as he or she perceives the window to be bigger than its actual size. Therefore, with the present invention, the window size and hence the power consumption of the air conditioner can be reduced, without negatively affecting the comfort of the passenger.

BRIEF DESCRIPTION OF THE FIGURES

Embodiments of the invention will now be described, for the sake of example only, with reference to the following drawings in which:

Fig. 1 shows visual acuity of the human eye in relation to the fovea distance;

Fig. 2 shows an apparatus for providing illumination to an interior of a vehicle according to an embodiment of the present invention;

Figs. 3(a) - (c) show components of the apparatus of Fig. 2;

Fig. 4 shows the relative light intensity of a LED as a function of the scanning angle;

Fig. 5 shows an example way of installing a LED-strip of the apparatus of Fig. 2 in a car;

Fig. 6 shows the interfaces between processing means and output means of the apparatus of Fig. 2;

Fig. 7 shows a data package according to the RGB-data transmission protocol adopted for colour data transmission in the apparatus of Fig. 2;

Fig. 8 shows the encoding of the LED enumeration as single bits of one byte in the transmission protocol of Fig. 7;

Fig. 9 shows the basic structure of the single transmission protocol used for transmitting colour data for one LED in the apparatus of Fig. 2;

Fig. 10 shows the initialization protocol used in the apparatus of Fig. 2;

Fig. 11 shows the delay measurement protocol used in the apparatus of Fig. 2;

Fig. 12 shows the tasks performed by the apparatus of Fig. 2 after its configuration and initialization;

Fig. 13 shows the tasks of Fig. 12 together with two further blocks for configuring and initializing the apparatus;

Fig. 14 shows information flow in the apparatus of Fig. 2;

Fig. 15 shows the schematic of the process structure implemented on the apparatus of Fig. 2;

Fig. 16 shows representative colours determined with and without first converting the image data into the HSB space; and

Fig. 17 shows a situation where the colours of the illumination provided by the apparatus of Fig. 2 are mismatched with the environment and a situation where the colours of the illumination provided by the apparatus of Fig. 2 match the environment.

DETAILED DESCRIPTION OF THE EMBODIMENTS

A) CELS 200-System Overview

Fig. 2 shows an apparatus 200 for providing illumination to an interior of a vehicle according to an embodiment of the present invention. The apparatus 200 may be referred to as a Car Environment Light System (CELS 200). The CELS 200 may be described as a model which:

i. can be isolated from its surroundings (i.e. the CELS 200 may be referred to as a super-system)

ii. contains relations between different attributes (inputs, outputs, states, etc.)

iii. consists of interconnected components or subsystems.

In particular, Fig. 2 shows the CELS 200 with its input and output. Due to the interaction with its surroundings, the CELS 200 is considered an open system. The CELS 200 adapts to the environment which functions as the input of the CELS 200. Furthermore, by emitting light, the CELS 200 returns output to the surroundings.

As shown in Fig. 3(a), a scene corresponding to a view seen by a person through the window of the vehicle is captured and in doing so, this scene which corresponds to the environment surrounding the vehicle is digitized into a digital image. This digital image is then processed to obtain context-sensitive lighting to be displayed to an interior of the vehicle.

In order to interact with the surroundings, interfaces with the surroundings have to be provided in the CELS 200. As shown in Fig. 3(b) and (c), the CELS 200 comprises input means 202, processing means 204 and output means 206. The relations between these three components 202, 204, 206 are illustrated by the arrows in Figs. 3(b) and (c). These components 202, 204, 206 of the CELS 200 communicate with each other by communication systems. In particular, the input means 202 are configured to provide input data varying according to an environment surrounding the vehicle. As shown in Fig. 3(a), these input means 202 may comprise colour/light sensors.

The processing means 204 are configured to process the input data to determine characteristics of the illumination to be provided. In particular, the processing means 204 serve to process the digital representation of the scene received from the input means 202 to transform the input attributes (colours and illuminance) of this scene into desired output attributes (colours and illuminance). The processed information is then communicated to the output means 206 mounted in the vehicle's interior. The output means 206 are configured to provide the illumination with the determined characteristics to the interior of the vehicle. In particular, they display lights (context-sensitive lighting) whose colours and illuminance are based on the processed information. The illumination provided by the output means 206 correspond to the environment surrounding the vehicle in the manner that the characteristics of the illumination match the characteristics of the environment surrounding the vehicle when the illumination is provided.
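The transformation performed by the processing means 204 can be illustrated with a minimal Python sketch that reduces a captured frame to one representative colour and a crude illuminance estimate by averaging pixel values, along the lines of claim 4 (function and variable names are illustrative assumptions, not from the patent):

```python
import numpy as np

def representative_colour(frame):
    """Reduce an H x W x 3 RGB frame (values 0-255) to one
    representative colour by averaging its pixel values."""
    pixels = frame.reshape(-1, 3).astype(float)
    mean_rgb = pixels.mean(axis=0)          # average colour per channel
    illuminance = mean_rgb.mean() / 255.0   # crude brightness estimate, 0..1
    return mean_rgb, illuminance

# Example: a tiny frame that is half sky blue, half road grey
frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0] = [135, 206, 235]   # sky blue row
frame[1] = [128, 128, 128]   # grey row
colour, lum = representative_colour(frame)
```

A median or mode of the pixel values, or a prior conversion to the HSB space as in claim 6, could be substituted for the mean in the same structure.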

In this embodiment, the vehicle is in the form of a car, the output means 206 are positioned along the roof liner length of the car's interior and the information provided by the input means 202 is acquired through windows of the car. The car may be an electric taxi which is suited for operation in tropical megacities as it considers specific challenges like the constant operation of air-conditioning systems due to the climate conditions [Wit13].

In alternative embodiments, the vehicle may be in other forms such as an aeroplane and the output means 206 may be positioned along other parts of the vehicle. For example, the input data can be acquired from any other see-through panel of the aeroplane. This panel is defined by a boundary and the output means may be located along the boundary of the panel.

Also, a controller 208 may be further included. This controller 208 can be used to manipulate the determination of the characteristics of the illumination to be projected and/or to overlay additional information. For example, this controller 208 can be in the form of a user-interface to allow the user to adjust the output lighting based on his or her personal preferences. In some embodiments, the CELS is used to communicate additional information to the passengers inside the vehicle or subliminally influence the actions of passengers. For example, the output means 206 may be mounted near a vehicle door in which case, it can be used together with the controller 208 to show situation-dependent information (e.g. whether it is safe to alight from the door of the vehicle).

The input means 202, processing means 204 and output means 206 of the CELS 200 will now be described in more detail.

B) Design of the CELS Components 202, 204, 206

B-1) Design of the Input Means 202

This section discusses possible components that can be used to implement the input means 202 of the CELS 200.

Possible image sources are cameras (real time) or image collections. The latter usually only provide images of the past and are not suitable for use in a system that considers ongoing changes. However, there are some image collections that can be considered as a real-time data source. This is explained in the following.

B-1-1) Image Databases as the Input Means 202

Several image collections can be found on the internet but most of these collections are not stored in a structured and systematic way. However, there are some image databases comprising structured image collections, for example the Street View images provided by Google. In particular, the Street View images are systematically taken and saved, and each image is associated with geographic parameters (e.g. the GPS coordinates and the cardinal direction) of where the image is taken. Hence, the images can be utilized in a deliberate manner [Goo12]. For instance, by using the following unique parameters: Longitude: 103.854189; Latitude: 1.287802; Heading: 110°; Pitch: 0°; Field of View: 110°; Size: 640x640, the corresponding distinct image associated with these parameters can be downloaded. The fact that the Street View images only cover areas adjacent to streets does not interfere with the purpose of the CELS 200. Additionally, the Street View database covers most of the streets in many countries including Singapore, hence permitting the possible operation of the CELS 200 using Street View images in these countries. Therefore, it is possible to use the Google Street View image collection (or any other structured image collection with online images of possible environments surrounding the car, e.g. Bing Maps) as the real-time input data for the CELS 200.
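The parameters listed above map naturally onto a request to Google's Street View Static API. The sketch below builds such a request URL; the endpoint and parameter names reflect the public Static API, but treat the exact details (and the placeholder key) as assumptions rather than the patent's own mechanism:

```python
from urllib.parse import urlencode

def street_view_url(lat, lon, heading, pitch, fov, size, api_key):
    """Build a Street View Static API request URL for the distinct
    image identified by the given geographic parameters."""
    base = "https://maps.googleapis.com/maps/api/streetview"
    params = {
        "location": f"{lat},{lon}",  # latitude,longitude
        "heading": heading,          # cardinal direction in degrees
        "pitch": pitch,
        "fov": fov,                  # field of view in degrees
        "size": size,                # e.g. "640x640"
        "key": api_key,              # placeholder; a real key is required
    }
    return f"{base}?{urlencode(params)}"

# The example parameters from the text (a location in Singapore)
url = street_view_url(1.287802, 103.854189, 110, 0, 110, "640x640", "YOUR_KEY")
```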

To decide whether to use cameras or Google Street View as the input means 202, one evaluation criterion is the amount of integration effort required. The bulk of the integration effort required when using cameras lies in the mechanical domain. In particular, the space, position and fixture of the cameras have to be considered. In contrast, the bulk of the integration effort when using Google Street View lies in the software domain. Since the Street View images are stored online, access to the internet is required if the images are to be downloaded from the internet. In this case, a device configured to download the Street View images serves as the input means 202. This device may be, for example, an automotive human-machine interface that can provide internet access via a 3G connection. However, in times of low connectivity or even complete loss of either the mobile or the GPS signal, the Street View images may not be available. This results in asynchronous data, which can deteriorate the performance of the CELS 200.

Another way to use Google Street View is to download the Street View images beforehand. In this case, the input means 202 comprises a device configured to store the pre-loaded images of possible environments that can surround the vehicle. The device may also be used to download the images from the internet beforehand or alternatively, a different device is used for the download and the images are transferred to the input means 202. Such devices may be either developed or bought if available. Since the Street View images are generally not frequently updated, it is possible to process the images from the complete Street View database in advance. In other words, the processing means can process each pre-loaded image to determine the characteristics of the illumination prior to the vehicle going through the environment shown in the pre-loaded image. Therefore, the necessary computation power for the operation of the CELS 200 can be reduced in this case.
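The offline precomputation described above can be sketched as a lookup table keyed by the geographic parameters of each pre-loaded image, so that the in-vehicle system only performs a table lookup while driving. The structure, key rounding, and names below are illustrative assumptions:

```python
def precompute_lighting(images, extract_colour):
    """Offline pass over pre-loaded images: map each image's geographic
    key to the illumination characteristics derived from it."""
    table = {}
    for (lat, lon, heading), image in images.items():
        # Round the key so that nearby GPS fixes hit the same entry
        key = (round(lat, 4), round(lon, 4), heading)
        table[key] = extract_colour(image)
    return table

# Toy example with a single pre-loaded image and a stand-in extractor
images = {(1.28780222, 103.85418901, 110): "img-a"}
table = precompute_lighting(images, lambda img: ("colour-of", img))
```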

A Google Street View image is taken at a specific time (e.g. morning) of the day and under a specific weather condition (e.g. sunny). Hence, a pre-loaded image does not include information regarding the intensity of the environment lighting at the time the vehicle is at the location shown in the image. To address this, an ambient light sensor for detecting the intensity of the environment lighting surrounding the vehicle can be further incorporated in the CELS 200. For example, on cloudy days, the ambient light sensor detects environment lighting with a lower intensity whereas on sunny days, the ambient light sensor detects environment lighting with a higher intensity. The characteristics of the illumination provided to the interior of the vehicle are then determined based on not just the colours in the pre-loaded Google Street View image but also on the intensity of the environment lighting as detected by the ambient light sensor.

Compared to the above-described approach of using the Street View images, the integration of one or several cameras may seem very excessive. Suitable spaces and locations have to be found for the integration of the cameras. For example, the cameras have to be positioned such that they have a clear field of view, without adversely affecting the appearance of the vehicle's interior.
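One plausible way to realize the ambient-light correction described above is to scale the pre-loaded image's colour by the ratio of the measured intensity to a reference intensity. The sketch below is a hypothetical illustration; the reference level of 10,000 lux and all names are assumptions, not values from the patent:

```python
def adjust_for_ambient(precomputed_rgb, sensor_lux, reference_lux=10000.0):
    """Scale a pre-loaded image's colour by the ratio of the measured
    ambient light level to an assumed reference level, clamping to 0-255."""
    scale = min(sensor_lux / reference_lux, 1.0)
    return tuple(min(255.0, c * scale) for c in precomputed_rgb)

# Cloudy day: the sensor reads 2,000 lux, so the interior lighting dims
dim = adjust_for_ambient((131.5, 167.0, 181.5), 2000.0)
```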

As moving objects such as adjacent cars that are in motion are not included in the Street View images, it is preferred that the camera is used so that dynamic features of the environment are considered in the data processing (otherwise, the enlargement of the visual perception could be impaired). Hence, the input means 202 in the CELS 200 embodiment is realized with cameras and the input data comprises images of environments surrounding the vehicle. Instead of images, the camera may be used to acquire video data and in this case, images from the recorded video (i.e. frames of the recorded video) can be provided as input data. Further, one or more cameras may be used and other sources apart from cameras may be integrated into the CELS 200. Street View images may also be used together with images and/or videos captured by the camera(s) as the input data.

B-1-2) Camera as the Input Means 202

Issues which may affect the CELS 200 performance are identified and discussed below. The following also describes how the camera may be configured to address these issues.

Low light intensity. Since an image is obtained by exposing individual photosites to light, a low light intensity can negatively affect the image quality. The amount of light the photosites are exposed to can be increased by using a lens with a high lens speed. Moreover, the light sensitivity of the camera can be improved by using an image sensor with a high fill-factor and a bigger image sensor format. Furthermore, the amount of light the photosites are exposed to can be dynamically adjusted during operation by lowering the f-Stop and increasing the exposure time.

Wide view coverage. To achieve a wide view coverage, the angle of view of the lens is preferably high. This allows a large area of the outside environment to be captured which in turn allows the possibility of choosing a smaller area of interest for further processing.

Fast-changing light conditions. Fast-changing light conditions are e.g. encountered if the vehicle with the CELS drives into a tunnel. To address this issue, the exposure time may be adjusted. Alternatively, the intensity of light the image sensor in the camera is exposed to can be reduced. This can be done by adjusting the aperture size using a controlled DC-iris lens. In particular, an integrated DC-motor can be used to continuously adjust the aperture according to motor-control signals sent by a circuit within the camera.

Egomotion. The motion of the camera (due to the motion of the vehicle) or the motion of the object the camera is trying to capture can affect a variety of camera characteristics, which are described in the following.

Egomotion causes artifacts due to the Rolling-Shutter-Effect and/or interlaced scanning. To avoid these artifacts, a global shutter and progressive scanning can be used instead. Motion blur can be reduced by decreasing the exposure time to the lowest value possible. To compensate for this decrease in exposure time, characteristics which improve the overall light sensitivity, like a bigger aperture or sensor size, are preferred. In addition, the moving vehicle causes the camera to vibrate and thus it may be preferable to include an image stabilizer.

Output Format. Preferably, whether the application of a particular type of compression is reasonable is evaluated based on the output format. For this purpose, the data rate calculation conducted earlier is repeated with the values specified above. Presuming a resolution of 1280x720 pixels at a frame rate of 25 FPS and 24 bits/pixel, the resulting data rate adds up to nearly 66 MByte/s. With the exception of USB 2.0, this data rate can be handled by the communication interfaces. However, considering that the CELS 200 preferably illuminates not only one but two sides, a minimum of three cameras is probably preferable. This can have some limiting effects on the computer's data bus and CPU load. Therefore, the application of compression such as H.264 or M-JPEG is preferred. The camera specification used in the CELS 200 embodiment, including the compression technique and the associated communication interface, is shown in Table 1 below.

Parameter            Characteristics         Specification
-------------------  ----------------------  -----------------
Image sensor         Format                  Bigger
                     Technology              CCD
                     Scanning technique      Progressive
                     Resolution              > 1 Megapixel
                     Shutter type            Global
Lens                 Aperture                Small
                     Focal length            Short
Supported features   Auto exposure           Yes
                     Image stabilization     Yes
                     Frame rate              > 25 FPS
Technical            Size                    Small
                     Weight                  Low
                     Power consumption       Low
                     Power supply            12 V - 13.8 V
                     Operating temperature   > 30°
                     Operating humidity      Up to 100%
On-board             M-JPEG compression      Yes

Table 1
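The "nearly 66 MByte/s" figure quoted in the Output Format discussion can be reproduced with a short back-of-envelope calculation. This is only a sketch; interpreting "MByte" as 2^20 bytes is an assumption (it is the interpretation under which the quoted figure works out).

```python
# Back-of-envelope data-rate check for the figures quoted above:
# 1280x720 pixels, 25 FPS, 24 bits/pixel, uncompressed.
width, height = 1280, 720
fps = 25
bits_per_pixel = 24

bytes_per_second = width * height * fps * bits_per_pixel // 8
megabytes_per_second = bytes_per_second / (1024 * 1024)  # MByte = 2^20 bytes (assumption)

print(f"{megabytes_per_second:.1f} MByte/s")  # 65.9 MByte/s, i.e. "nearly 66"
```

With three cameras, the aggregate uncompressed rate would approach 200 MByte/s, which motivates the preference for on-board compression such as M-JPEG.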

B-2) Design of the Processing Means 204

In the CELS 200 embodiment, a computer is used as the processing means. However, the processing means may alternatively comprise field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs).

B-3) Design of the Output Means 206

The output means 206 preferably satisfy the following requirements.

Colour Depth. Preferably, the light sources are capable of providing colour fidelity so as to effectively achieve a visual perception enlargement effect. According to [MT08], the human eye can distinguish approximately 380,000 different colours. Assuming that the final colour of the emitted light is created by mixing the three primary colours red, green and blue (RGB), the number of shades required for each primary colour to obtain 380,000 different colours can be calculated as the cube root of 380,000, which is approximately 73. This requirement implies that the ability to change the illuminance (i.e. to be dimmable) in order to create different shades is a factor to consider when deciding which light source to use.
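The per-primary shade count can be checked numerically; a minimal sketch, using the 380,000 figure cited from [MT08] above:

```python
import math

# Number of shades needed per RGB primary so that mixing three primaries
# yields at least 380,000 distinguishable colours (figure from [MT08]).
distinguishable_colours = 380_000
shades_per_primary = math.ceil(distinguishable_colours ** (1 / 3))

print(shades_per_primary)  # 73, since 72^3 = 373,248 < 380,000 <= 73^3 = 389,017
```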

Luminous efficacy. Preferably, the output means 206 has a high luminous efficacy (the closer the luminous efficacy is to the maximum value of 683 lm/W (100%), the better). This translates to a high efficiency. In particular, the luminous efficacy is calculated as the ratio of the luminous flux emitted to the power consumed by the light source [JRAA00].

Installation space. In this embodiment, the output means 206 are mounted along the length of the roof liner of the car's interior. To minimize the integration effort, it is preferable to use a light source which requires a minimal amount of space for its installation. Therefore, it is preferable to reduce the number of additional devices required.

In the CELS 200 embodiment, a LED-strip (comprising a series of inorganic LEDs along its length) is selected to serve as the output means 206 as it fulfils the desired characteristics mentioned above. Other possible output means that can be used include any other form of lighting components which can be configured to vary in intensity and colour, for example filament bulbs, cold cathode fluorescent lamps and solid-state light sources (e.g. inorganic LEDs, organic LEDs (OLEDs) and electroluminescence).

B-3-1) LED-strip as the Output Means 206

RGB-LEDs. In order to achieve light colours that vary with the environment of the vehicle, it is preferable that the LEDs are capable of emitting lights of different colours. For this purpose, the LED-strip in CELS 200 utilizes RGB-LEDs which are able to display up to 16.7 million different colours by mixing the three primary colours red, green and blue.

Individually addressable. The LEDs of the LED-strip used in the CELS 200 are individually addressable to enable a smooth light colour adaptation over the whole roof liner. Hence, both the brightness and the colour of each LED can be independently controlled.

12 V input voltage. To avoid the need for additional devices such as voltage converters, LEDs which are compatible with the 12V auxiliary power grid are used in the CELS 200.

Number of LEDs per meter. One characteristic of a LED is its light emittance. Fig. 4 shows the relative light intensity of a LED as a function of the scanning angle [Roh11]. As shown in Fig. 4, the relative light intensity of the LED decreases with the scanning angle. Thus, the number of LEDs along the LED-strip determines whether a seamless light emittance can be provided. In particular, to obtain a relative intensity of 50% with a distance of 15 mm between the LED and the diffusor, a light cone radius of tan(55°) × 15 mm = 21.4 mm is necessary. This implies that the distance between any two LEDs along the LED-strip has to be 42.8 mm or less to achieve the relative intensity of 50%. In other words, to achieve the relative intensity of 50%, there need to be at least 24 LEDs along every meter of the LED-strip. Although a LED-strip with non-uniformly distributed LEDs may be used, this is not preferred as it makes the implementation of the CELS 200 more challenging. On the other hand, the implementation of the CELS 200 is easier with a LED-strip having uniformly distributed LEDs as there is more predictability in this case. To determine the LED-strip to be used in this embodiment (in particular, the number of LEDs along the LED-strip), two parameters, specifically the LED density and the total roof liner length, are considered. Fig. 5 shows an example way of installing the LED-strip in the car. In this example, the LED-strip 1000 is installed along the roof liner of the car, except the portion of the roof liner along the front of the car (this is to avoid distracting the driver).

C) Interfaces Between the Processing Means 204 and the Output Means 206

Fig. 6 shows the interfaces between the processing means 204 in the form of a computer 204 and the output means 206 in the form of the LED-strip. As shown in Fig. 6, the information flow between the computer and the output means 206 is facilitated by a communication interface device and a plurality of LED drivers.

C-1) LED Driver

In the CELS 200, each LED driver is configured to drive a LED along the LED-strip. Each LED-strip comprises a control interface for communicating with the LED drivers. Colour data (i.e. characteristics of the illumination to be provided) is transmitted from the LED drivers to the individually addressable LEDs. The LED drivers are implemented using integrated circuits (ICs) in the CELS 200.

C-2) Communication Interface Device

In one example, the LED driver is implemented using the Worldsemi WS2801 LED driver addressed using the SPI interface bus and the communication interface device is a SPI interface device implemented with a microcontroller (MCU) such as an ATmega328.

The WS2801 LED driver can support data cascading, allowing multiple drivers and LEDs to be connected in series. The SPI interface device serves as the SPI master device. The SPI master device receives signals from the processing means 204, e.g. via USB, Ethernet or similar, converts the signals to SPI signals and sends the signals to the first LED driver of the LED-strip. At the same time, each LED driver relays the SPI signals it receives in a previous clock pulse (if any) to a subsequent LED driver. This relay of SPI signals continues for as long as the transmission of data from the processing means 204 continues. As soon as the data transmission finishes, every addressed driver (i.e. every driver which has received SPI signals) drives its LED to operate according to the SPI signals it receives. Therefore, the LEDs can be set to simultaneously light up in their designated colours. The number of illuminated LEDs depends on the number of addressed drivers which in turn depends on the amount of transmitted data.

In the CELS 200, for each data transmission, the first SPI signals sent from the SPI master device are always received by the first LED driver which is associated with the first LED along the LED-strip. Furthermore, the CELS 200 is configured such that no LED driver is skipped over when performing the data transmission. However, these are not necessary and in other embodiments, the first SPI signals may be received by a LED driver associated with a LED further down the LED-strip and/or some LED drivers may be skipped over such that their associated LEDs do not light up.

C-3) Specification of the Communication Protocols

C-3-1) Transmission Protocol

A transmission protocol is developed for the CELS 200 to transmit the colour data so that the computer is able to send the calculated representative colour to the communication interface device which in turn pushes it to the LED strip via the LED drivers. Preferably, the protocol allows detection of possible transmission errors. Further, the protocol is preferably as short as possible. To support the total size of the data to be transmitted while having a relatively short protocol, the processing means 204 in the CELS 200 are configured to transmit the colour data to the output means 206 in parts. Fig. 7 illustrates a data package according to the RGB-data transmission protocol adopted for transmitting colour data in the CELS 200. The transmit header serves to differentiate this data package from other data packages based on other protocol types. The addressing section of the data package comprising the Strip Divisor and the LED Number serves to implement the following two functions.

As mentioned above, the processing means 204 are configured to transmit the colour data to the output means 206 in parts. In particular, the colour data for a portion of the LED strip is transmitted separately from that for other portions of the LED strip (in other words, the LED strip is virtually split into several parts). The Strip Divisor indicates the number of parts the LED strip is virtually split into. Since the Strip Divisor is 1 byte (8 bits) in length, the maximum number the Strip Divisor can take is 2^8 = 256. In other words, the LED-strip can be virtually split into at most 256 parts. The number of LEDs in each part is:

    Number of LEDs in each part = (Number of total LEDs in the LED strip) / (Strip Divisor)

Since each LED requires only 3 bytes of RGB data (1 byte for each R, G and B value), the length of a data package can be significantly reduced via the virtual splitting of the LED strip. The transmission frequency is then equal to the Strip Divisor value.

In the CELS 200, each LED is separately addressed by means of the LED Number in the addressing section. In particular, the LEDs along the LED strip are enumerated. This allows the exclusion of specific LEDs from being updated. In particular, only LEDs whose colours are to be changed are updated and hence, less data (and thus fewer packages) have to be sent. For example, if all the LEDs in a particular section do not need to be updated, the package for this section does not have to be sent. The enumeration of the LEDs along the particular section of the LED strip is stored under the "LED Number" in the data package for this section. The total size of this information, i.e. Size (LED Number), is variable and is calculated as shown in Equation (1). The number "8" is included in the denominator of Equation (1) due to the encoding of the LEDs enumeration as illustrated in Fig. 8. In Fig. 7, the "Size (LED Number)" is indicated as "1+n" bytes.

    Size (LED Number) = (Number of total LEDs) / (Strip Divisor × 8)     (1)

A specific LED can thus be addressed based on the section of the LED strip it belongs to and its position in this section. For instance, with a total of 96 LEDs along the LED strip and a Strip Divisor of 4, there would be 4 sections of 24 LEDs. The 56th LED would belong to the 3rd section which starts with the 49th LED. Thus, the 56th LED can be addressed through the 7th bit of the data package sent for the 3rd section of the LED strip.

Fig. 8 shows the encoding of the individual LEDs. In particular, each LED corresponds to a single bit of one byte of the LED number. A 0-bit indicates that there is no colour data for the corresponding LED and it is not necessary to update this particular LED. The number at the bottom of Fig. 8 shows the resulting decimal representation of each byte.
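The addressing arithmetic described above can be sketched as follows. This is a minimal illustration of Equation (1) and the worked example (96 LEDs, Strip Divisor 4, 56th LED); the exact bit ordering inside each byte follows Fig. 8, which is not reproduced here, so that detail is an assumption.

```python
def locate_led(led, total_leds, strip_divisor):
    """Map a 1-based LED index to (section, bit position).

    Sections are 1-based; the bit position is 0-based within the
    section's LED Number field. Bit ordering within each byte is an
    assumption (Fig. 8 is not reproduced here).
    """
    leds_per_section = total_leds // strip_divisor
    section = (led - 1) // leds_per_section + 1
    bit = (led - 1) % leds_per_section
    return section, bit

def led_number_size(total_leds, strip_divisor):
    """Equation (1): one bit per LED of a section, packed into bytes."""
    return total_leds // (strip_divisor * 8)

# Worked example from the text: 96 LEDs, Strip Divisor 4.
print(locate_led(56, total_leds=96, strip_divisor=4))  # (3, 7): 3rd section, 7th bit
print(led_number_size(96, 4))                          # 3 bytes of LED Number data
```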

C-3-2) Other Communication Protocols

Apart from the transmission protocol, other protocols are also used in the CELS 200. The data packages sent according to these protocols comprise distinct headers that distinguish the data packages from other data packages sent based on other protocols. The protocols are described below.

Single transmission protocol. The single transmission protocol is also used for the transmission of RGB data but only for one LED. The basic structure of this protocol is similar to the above-described transmission protocol, as shown in Fig. 9. This single transmission protocol serves to allow specific tasks to be assigned to certain LEDs. For example, a LED that is part of the CELS 200 can simultaneously serve as a reading lamp. When the user turns on the reading lamp, the single transmission protocol is used to address only this specific LED to set its R, G, B values to the values for maximum illuminance.

Initialization protocol. The initialization protocol is used to initialize the communication between the communication controller and the computer. Fig. 10 shows the initialization protocol and indicates the parameters that can be adjusted via this protocol.

Delay measurement protocol. The delay measurement protocol serves to transmit the delay measured by the communication interface device. Since one byte can only represent values up to 255, the protocol uses a four byte representation of a signed integer value. Therefore, it is possible to transmit delay values up to 2.1 billion microseconds. Fig. 11 shows the delay measurement protocol.
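The four-byte signed representation of the delay value can be sketched with Python's standard `struct` module. The byte order on the wire is not specified in the text, so big-endian here is an assumption.

```python
import struct

# The delay measurement protocol carries the measured delay as a
# four-byte signed integer (byte order assumed big-endian here).
delay_us = 1_500_000  # example delay: 1.5 s expressed in microseconds
payload = struct.pack(">i", delay_us)

print(len(payload))                     # 4 bytes on the wire
print(struct.unpack(">i", payload)[0])  # round-trips to 1500000
print(2**31 - 1)                        # max representable: ~2.1 billion us
```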

Acknowledgement protocol. An acknowledgement package based on an acknowledgement protocol is sent from the communication interface device to the computer to confirm the receipt of any packages based on the previously described protocols. The size of each data package sent based on this acknowledgement protocol is fixed to three bytes and only the acknowledgement message changes according to the particular case. Table 2 shows the different acknowledgement messages to be included in each acknowledgement data package. These messages indicate the successful or unsuccessful transmission of a data package.

Message                    Purpose
-------------------------  ------------------------------------------------------
MCU available              The MCU has finished its boot sequence
MCU ready                  The MCU has received and successfully processed the
                           initialization protocol
MCU initialization failed  The MCU processing of the initialization protocol has
                           failed
Correct transmission       The transmission protocol has been received and
                           successfully processed
Incorrect start            The start byte of the transmission protocol could not
                           be found
Incorrect addressing       The size of the addressing part was wrong
Incorrect RGB-data         The RGB-data is not a multiple of 3 bytes
Incorrect Strip Divisor    The Strip Divisor is too big
Data left                  There was data left in the buffer although the whole
                           protocol was processed
Incorrect end              The stop byte could not be found

Table 2

In the event of a failed or incomplete transmission, the stored colour data is overwritten by the previously received colour data. Hence, possibly corrupted data is not transmitted to the LED-strip and is not reflected via the output of the LED-strip. In addition, by overwriting the stored colour data even when the transmission fails or is incomplete, the refresh rate of the LED strip can be kept constant. Otherwise, if the LED strip were only refreshed after a retransmission of the corrupted data, there would likely be a delay in the refreshing of the LED strip.

D) Program Framework of the CELS 200

Fig. 12 shows the tasks performed by the CELS 200 after the configuration and initialization of the CELS 200. In particular, input data in the form of an image is first acquired. This image is then converted into a different format for further processing. The representative colours in the image are then determined and the communication interface device then transmits these representative colours to the LEDs along the LED-strip via the LED drivers as described above. Fig. 13 shows the tasks performed by the CELS 200 of Fig. 12 together with two further blocks 1302 and 1304. There are two ways of configuring the CELS 200. The first way includes only block 1302 whereby the CELS 200 is initialized before any image is captured and is not re-configured during the subsequent operation of the CELS 200. The second way includes both blocks 1302 and 1304 whereby the CELS 200 is not only initialized but is also controlled during operation.

E) Process Structure of the Processing Device

The process structure of the processing device preferably fulfills the following requirements:

Enable access to the input data for other applications. A method to enable other applications to access the input data is preferably implemented.

Process the latest data. To avoid any asynchronous LED colour display, it is preferable that the CELS 200 processes the latest information. In particular, it is preferable for each succeeding task to have access to the latest data provided by the former task. Moreover, this is preferably fulfilled independently of any discrepancy in processing time between the various tasks.

The process structure of the processing device is preferably configured as follows.

Concurrent thread-safe framework. In order to reduce the delay caused by the processing of the tasks, the CELS 200 preferably operates the tasks at a high efficiency. Therefore, a multi-thread framework, where distinct tasks can run concurrently in their own threads, is preferably implemented in the CELS 200. Mechanisms which prevent data corruption due to this concurrency are also preferably implemented.

Calibrate the CELS performance versus the necessary resources. The CELS 200 can be integrated in an automotive human-machine interface. Therefore, preferably, the computation power of the CELS 200 is not completely utilized in performing the tasks. In order to allocate the resources, one or several parameters are preferably provided to influence the CELS 200 performance. This serves to reduce the amount of computation power required to perform the tasks.

Fixed LED-strip data update. The colour data of the LED-strip is preferably updated at fixed intervals to ensure a smooth and uniform colour transition. However, the simultaneous utilization of several different input sources and the possible variation of processing due to the concurrency (achieved by the multi-thread framework) can lead to a discrepancy in the data processing time of the input from the different input sources. Therefore, the process structure in CELS 200 is preferably configured to implement a mechanism which refreshes the LED-strip data independently from the processing times of different input sources.

E-1) Process Structure Implemented on the Processing Device of the CELS 200 embodiment

As illustrated in Fig. 14, the process structure implemented on the processing device is divided into four main parts, each representing a self-contained thread. This helps to fulfill the three requirements described above (as elaborated below).

Concurrent thread-safe framework. The whole CELS framework is embedded in a concurrent framework comprising four threads. Each thread has its distinct task and shares data with the preceding and succeeding thread. In order to prevent data corruption due to simultaneous data access, a thread-safe data structure is implemented.

Calibrate the CELS performance versus the necessary resources. The Image Capture block and the succeeding Image Conversion block and Representative Colour Determination block are divided into two threads. This enables an effective way to calibrate the CELS 200 performance. Since the processing thread is periodically executed at an adjustable rate, the necessary computation power can be calibrated by changing this rate. Hence, although the processing is independent of the image source frame rate, the execution rate of the resource-consuming image conversion can be calibrated. This calibration can be based on the resources which are available to the CELS 200 (taking into account that a lower execution rate results in a less frequent update of the colour data).

Fixed LED-strip data update. The other division, which separates the Image Conversion block and the Representative Colour Determination block from the LED Colour Adjustment block and the MCU Communication block, helps in satisfying the requirement of a fixed rate in the LED-strip data update. The LED Colour Adjustment block is also periodically executed. Thus, it is decoupled from the former ImageProcessor class. In particular, the execution of the LED Colour Adjustment class is independent of both the number of input sources and the individual image processing times of these input sources. Therefore, the LED Colour Adjustment class processes the latest colour data available as provided by the preceding classes. Thus, the LED-strip is updated with a constant refresh rate.
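The "process the latest data" behaviour described above can be illustrated with a single-slot, thread-safe buffer: a producer overwrites the slot at its own rate and a periodically executed consumer always reads only the most recent value. This is only a sketch of the pattern, not the actual thread-safe data structure of the CELS 200.

```python
import threading

class LatestSlot:
    """Single-slot thread-safe buffer: the reader always sees only the
    most recently written value; older, unread values are dropped."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = None

    def put(self, value):
        with self._lock:
            self._value = value  # overwrite: stale data is discarded

    def get(self):
        with self._lock:
            return self._value

slot = LatestSlot()
for frame in range(5):  # producer outpaces the consumer
    slot.put(frame)
print(slot.get())       # 4: only the latest frame is processed
```

A periodically executed consumer thread calling `get()` is thus decoupled from the producer's frame rate, which is the property the LED Colour Adjustment stage relies on.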

F) Process Structure of the Communication Interface Device

F-1) Schematic of Process Structure Implemented on the Communication Interface Device

Fig. 15 shows the schematic of the process structure implemented on the communication interface device in the CELS 200.

At the beginning, the communication interface device runs the initialization routine. In this initialization routine, several variables are declared, the RS-232 communication is set to 115200/8-N-1 and a watchdog timer is set to two seconds. After the initialization routine, the communication interface device remains in an infinite while loop. This while loop contains two if-clauses. The first if-clause is satisfied once serial data has been received and the second if-clause is cyclically executed. Both are explained in the following subsections.

F-1-1) Colour Data Protocol Processing

To satisfy the first if-clause, the program looks through the data the communication interface device receives to find a start byte and a header byte. Once these are found, the processing begins.

The RGB-Data Transmission protocol is used for transmitting each data package. The information regarding the strip partition (i.e. segment of the LED strip) and the addressed LED are extracted from the data package. Thus, the colour data can be directly written from the serial buffer into the distinct LED data structure. The LED data structure also provides an acknowledgement package upon successful receipt of the whole data package.

F-1-2) Cyclic LED-Strip Update

The second if-clause is cyclically executed and the following is implemented in each cycle.

The latest colour data written to the communication interface device is mixed with the previous colour data written to the communication interface device. This helps to avoid an abrupt colour change, i.e. it smoothens the colour change. Equation (2) is used to mix the colours. In particular, a mix-factor is pre-determined, whereby this mix-factor determines the composition of the colours eventually displayed by the LEDs. In Equation (2), the "Final Colour Value" is a R, G or B value which is respectively determined by using Equation (2) with the latest R, G or B data written to the communication interface device ("Latest Data") and the previous R, G or B data written to the communication interface device ("Previous Data"). The colour displayed by the LEDs is then determined from the Final Colour Values (i.e. R, G and B values) obtained using Equation (2).

    Final Colour Value = (Mix-Factor × Latest Data) + ((1 − Mix-Factor) × Previous Data)     (2)

In one example, the mix-factor is set as 0.5. Note that although in this embodiment, only the latest set of data and the previous set of data are mixed, in other embodiments, more data may be used to determine the final colour value. The data to be used may include any number of sets from the most recent predetermined number of (e.g. ten) sets of data written to the communication interface device. If more sets of data are used, the influence (weight) of each set is preferably adjusted such that it reflects when the set of data was written to the communication interface device. Preferably, the later the set of data was written, the higher the weight given to it. The weight of each set can for example be determined by first assigning an "age" to the set of data (e.g. from each later set of data to each earlier set of data, the "age" is incremented by 1, i.e. the age of the latest set of data = 0, the age of the most recent previous set of data = 1, the age of the set written just before that = 2 and so on). The weight for each set of data with a particular age can then be calculated as 1/(age+2).
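Equation (2) and the age-based weighting can be sketched as follows. Rounding the mixed channel to an integer and normalizing the age weights to sum to 1 are assumptions not stated in the text.

```python
def mix_colour(latest, previous, mix_factor=0.5):
    """Equation (2), applied per channel: R, G and B are mixed
    independently. Rounding to integer channel values is an assumption."""
    return tuple(
        round(mix_factor * new + (1 - mix_factor) * old)
        for new, old in zip(latest, previous)
    )

def age_weights(n_sets):
    """Generalization sketched above: weight each of the last n_sets
    data sets by 1/(age + 2), newest (age 0) first. Normalizing so the
    weights sum to 1 is an assumption."""
    raw = [1 / (age + 2) for age in range(n_sets)]
    total = sum(raw)
    return [w / total for w in raw]

# Mixing pure red (latest) with pure blue (previous) at mix-factor 0.5:
print(mix_colour((255, 0, 0), (0, 0, 255)))  # (128, 0, 128)
```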

Second, the Final Colour Values (R, G and B values) determined using Equation (2) are transmitted to the LED-strip via the LED drivers.

The above is cyclically executed, that is, executed every predetermined period of time (for example, 80ms). Thus, a constant LED colour refresh rate can be achieved. The cycle time (i.e. the predetermined period of time between changes in the LED colours) can be adjusted using the initialization protocol as shown in Fig. 10. Thus, the cycle time controls the degree of smoothness of the colour transformation.

Although it is possible to apply a time-triggered interrupt instead to update the LED colour data, such an interrupt can interfere with the receipt of serial data. This may result in the loss of transmitted data, making the whole data package invalid. In contrast, with the if-clause approach, the loss of transmitted data occurs less frequently.

G) Representative Colour Calculation by the Computer

In this embodiment, the output means 206 comprise a LED-strip aligned along the roof liner. An image is repeatedly acquired by a camera through each window and is associated with a group of LEDs along the portion of the LED strip over the window. The image shows the view through the window at the instant it is captured. The image is two-dimensional with a width (which corresponds to the dimension of the view parallel to the LED strip over the window) and a height. The top of the image is nearer to the LED strip whereas the bottom of the image is further away from the LED strip.

The computer is configured to determine representative colours from each input image acquired. This is elaborated below.

G-1) Select an Area of Interest

Each input image comprises a plurality of pixels, each of which has a pixel value representing its colour. In particular, each pixel value is in the RGB space, i.e. it comprises R, G, B sub-values indicating the brightness of red, green and blue in the pixel.

To determine characteristics of the illumination to be provided from an input image, an area of interest is first selected from the input image. The colours of the illumination are then determined from only this area of interest.

In the CELS 200 embodiment, the input image comprises a plurality of lines and the area of interest is selected by first extracting a certain percentage (e.g. 10%) of the lines at the top of the image to form an initial area of interest (e.g. the initial area of interest may contain lines 0 - 108 of the input image). Within the initial area of interest, lines are cyclically skipped over (for example, only every x-th line in the initial area of interest is selected, where x may be equal to 2) to form a final area of interest. Note that in alternative embodiments, an area of interest may not be selected and the whole image may be processed, or a different method may be used to select the area of interest.
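The two-step selection described above can be sketched as follows, modelling the image as a list of rows. The 10% top fraction, the skip factor x = 2 and the 1080-line image are the example values from the text or illustrative assumptions.

```python
def select_area_of_interest(image_rows, top_fraction=0.10, x=2):
    """Keep the top fraction of the image lines, then keep only every
    x-th of those lines (a sketch of the CELS 200 selection step)."""
    n_top = int(len(image_rows) * top_fraction)
    initial = image_rows[:n_top]  # initial area of interest (top lines)
    return initial[::x]           # final area of interest

rows = list(range(1080))          # stand-in for a 1080-line image
aoi = select_area_of_interest(rows)
print(len(aoi))                   # 54: 10% of 1080 lines, every 2nd kept
```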

Selecting an area of interest from the image for further processing is advantageous as the JPEG conversion (i.e. conversion of the JPEG image from the RGB space to the HSV space) to be performed later on requires computation time. Selecting an area of interest helps to reduce the amount of image data that needs to be processed and, due to this reduction, the computation time can be reduced. In effect, the computing time is decreased by nearly the same factor by which the regarded image data is reduced. This has been experimentally determined.

G-2) Assign Weights to Pixels in the Area of Interest

The area of interest is then weighted using a weighting function. To elaborate, the area of interest comprises a plurality of pixels representing respective points in a view captured through a window adjacent the LED-strip, and each pixel is given a weight that depends on the proximity of the point it represents to the LED-strip. The nearer the point is to the LED-strip, the greater the weight given to the pixel. In the CELS 200 embodiment, colour data from the bottom of the image, representing the lower view from the window, is taken into account less than that from the top of the image. More specifically, the area of interest is divided into three horizontal sections. The pixels in the top one-third of the area of interest are given a weight such that they account for 50% of the final representative colour and the pixels in the bottom two-thirds of the image are given a weight such that they account for the other 50% of the final representative colour. Thus, this weighting method does not discard data but instead emphasizes each section of the area of interest with a user-specified weighting.
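The 50/50 split described above can be sketched as per-row weights. Assigning uniform weights within each band is an assumption; the text only fixes the totals for the two bands.

```python
def row_weights(n_rows):
    """Per-row weights: the top third of the rows jointly account for
    50% of the representative colour, the bottom two-thirds for the
    other 50%. Uniform weights within each band are an assumption."""
    top = n_rows // 3
    return [0.5 / top if row < top else 0.5 / (n_rows - top)
            for row in range(n_rows)]

w = row_weights(9)
print(round(sum(w[:3]), 10), round(sum(w[3:]), 10))  # 0.5 0.5
```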

G-3) Divide the Area of Interest into Multiple Segments

The area of interest is then divided into multiple segments. Each segment is processed independently to determine the colours of the illumination. The number of segments is equal to the number of LEDs corresponding to the image (these LEDs' outputs are to be determined based on the image), so the illumination characteristics determined for each segment are projected by an associated LED along the LED-strip. Each segment has a height equal to the image height (which depends on the number of lines in the area of interest) and a width equal to the image width divided by the number of corresponding LEDs. In this embodiment, the area of interest is divided equally to obtain a number of image tiles with equal widths and heights. However, in other embodiments, the area of interest can be divided in an uneven manner to obtain image tiles of different shapes and/or sizes.

G-4) Convert Each Segment into the HSB Colour Space

Each segment in the red, green, blue (RGB) colour space is then converted into the hue, saturation, brightness (HSB) colour space so as to enhance the image data in the segment. In particular, each pixel value in the segment is converted to H, S, B sub-values indicating the hue, saturation and brightness of the pixel. In contrast to the RGB colour space, the colour and the brightness information in the HSB space are separated. Therefore, it is possible to increase the brightness of a particular colour without changing the actual colour. Fig. 16 shows representative colours determined without first converting the area of interest into the HSB space (see colours above the line) and with the conversion into the HSB space (see colours below the line).
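Since HSB is the same colour model as HSV, Python's standard `colorsys` module can stand in for the per-pixel conversion described above. This is a sketch, not the actual CELS 200 implementation.

```python
import colorsys

def rgb_to_hsb(r, g, b):
    """Convert one 8-bit RGB pixel to (H, S, B); HSB == HSV here."""
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

h, s, brightness = rgb_to_hsb(200, 100, 50)

# The point of the HSB space: brightness can be raised without touching
# the colour itself (hue and saturation stay fixed).
brighter = colorsys.hsv_to_rgb(h, s, min(1.0, brightness * 1.2))
brighter_rgb = tuple(round(c * 255) for c in brighter)
print(brighter_rgb)  # (240, 120, 60): same hue, 20% brighter
```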

G-5) Determine Representative Colour for Each Segment

The representative colour of each image segment in the HSB space is then determined by determining the median colour of the image segment while taking into account the weights of the pixels in the image segment (i.e. by determining the weighted median colour of the image segment). In particular, a weighted median H value, a weighted median S value and a weighted median B value are obtained by taking the weighted median of the H, S and B values of the pixels in the image segment. In other embodiments, the weighted mean or weighted mode of the pixel values in each image segment may be calculated instead.

The weighted median H, S, B values are then converted back into representative R, G, B values. Thus, the image data of each segment is condensed into one representative colour which is a mixture of the three R, G, B colours.
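A minimal sketch of this weighted-median reduction is given below. The function names are assumptions, and for brevity the sketch takes a plain per-channel weighted median, ignoring the circular wrap-around of the hue channel.

```python
import colorsys

def weighted_median(values, weights):
    """Return the value at which the cumulative weight first reaches
    half of the total weight."""
    pairs = sorted(zip(values, weights))
    half, cum = sum(weights) / 2.0, 0.0
    for v, w in pairs:
        cum += w
        if cum >= half:
            return v
    return pairs[-1][0]

def representative_colour(pixels_hsb, weights):
    """Condense one image segment (a list of (H, S, B) pixels with
    per-pixel weights) into a single representative RGB colour."""
    h = weighted_median([p[0] for p in pixels_hsb], weights)
    s = weighted_median([p[1] for p in pixels_hsb], weights)
    b = weighted_median([p[2] for p in pixels_hsb], weights)
    return colorsys.hsv_to_rgb(h, s, b)
```

Swapping `weighted_median` for a weighted mean or mode yields the alternative embodiments mentioned above.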

H) Processing Delay Compensation

Because the processing performed by the processing means 204 takes time, there is a delay between when the input data is provided and when the LED-strip outputs the coloured light (although the processing time may not be the only cause of the delay).

Hence, when determining the characteristics of the illumination, the computer is configured to take into consideration the delay between the providing of the input data and the providing of the illumination to the interior of the vehicle.

H-1) The Delay Composition

The delay between an image capture and an output from the LED-strip comprises delay from the following processes:

• Capturing Delay: The time between the moment an image is taken and the moment it is transmitted to the computer.

• Processing Delay: The time which the computer needs to receive and process the input data in order to send the colour data to the communication interface device.

• Communication Processing Delay: The time which the communication interface device needs to process the specified data protocol.

• Communication Delay: The time which is necessary to communicate with the LED-strip driver and to set the colour.
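The total delay is simply the sum of these four components. The numeric values below are illustrative placeholders, not measured figures from the described system.

```python
# Component delays in milliseconds (illustrative values, not measurements)
DELAYS_MS = {
    "capture": 30,          # image taken -> image transmitted to the computer
    "processing": 15,       # computer receives and processes the input data
    "comm_processing": 5,   # interface device handles the data protocol
    "communication": 10,    # talking to the LED-strip driver, setting colours
}

# Total end-to-end delay from image capture to LED output
total_delay_ms = sum(DELAYS_MS.values())
```

It is this total, end-to-end figure that must be compensated for, not any single component in isolation.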

Considering that the vehicle is moving, the above delay leads to an undesirable difference between the current view from the window and the colours which the LEDs show. This is illustrated in Fig. 17 (top). Depending on the car speed and the outside environment, the colours can in fact change rapidly. This can cause flicker or harsh colour changes. This asynchronous relation between the displayed colours and the environment deteriorates the visual perception enlargement effect. Therefore, the delay has to be taken into account when determining the representative colour. Taking the delay into account helps to achieve a more synchronous relation between the displayed colours and the environment as shown in Fig. 17 (bottom).
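This passage does not spell out the compensation mechanism itself. One plausible illustration of the idea, assuming the sampled image region is simply shifted forward by the distance the vehicle travels during the total delay, is sketched below; the function name and the metres-per-pixel calibration are assumptions, not the described embodiment.

```python
def compensated_offset_px(speed_mps, total_delay_s, metres_per_px):
    """Horizontal pixel offset by which to sample ahead of the window,
    so that the LED colours match the view once the end-to-end delay
    has elapsed (illustrative only; assumes a calibrated mapping from
    metres of forward travel to image pixels)."""
    return int(round(speed_mps * total_delay_s / metres_per_px))
```

For example, at 20 m/s with a 60 ms total delay and 5 cm per pixel, the sampling region would be shifted 24 pixels ahead, keeping the displayed colours roughly synchronous with the current view.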
