Title:
ADAPTIVE TRANSPARENT DISPLAY SYSTEM AND METHOD FOR ADAPTIVE OPTICAL SHIELDING
Document Type and Number:
WIPO Patent Application WO/2017/134629
Kind Code:
A1
Abstract:
A method for adaptively shadowing bright light in a field of view seen by an observer. The method includes receiving, from a first image sensor facing a scene in front of a transparent display, light intensity data indicative of the brightness of light received from a light source in the scene; determining a location of the light source relative to the transparent display based on the light intensity data; receiving, from a second image sensor facing an observer located in a line of sight of the transparent display, image data representative of an image of the observer; determining a location of the observer relative to the transparent display based on the image data; and controlling the transparent display based on the location of the observer and the location of the light source to adaptively adjust a transparency at one or more areas of the transparent display to at least partially block light received from the light source from passing through the one or more areas of the transparent display.

Inventors:
POULSEN, Jens Kristian (505 Landgren Court, Kitchener, Ontario N2A 4J8, CA)
Application Number:
IB2017/050614
Publication Date:
August 10, 2017
Filing Date:
February 03, 2017
Assignee:
POULSEN, Jens Kristian (505 Landgren Court, Kitchener, Ontario N2A 4J8, CA)
International Classes:
G02F1/01; B60J1/20; B60J3/04; B64C1/14; B64D11/00; G02B27/01; G02C7/10
Domestic Patent References:
WO2015169018A12015-11-12
WO2015128158A12015-09-03
Foreign References:
US20090096937A12009-04-16
DE19734307A11999-02-11
US9445639B12016-09-20
DE102008011086A12008-07-10
Attorney, Agent or Firm:
BERESKIN & PARR LLP/S.E.N.C.R.L., SRL (40 King Street West, 40th Floor, Toronto, Ontario M5H 3Y2, CA)
Claims:
What is claimed is:

1. A system comprising:

a transparent display device;

a first image sensor facing a scene in front of the transparent display device, the first image sensor configured to measure an intensity of light in the scene, and generate light intensity data indicative of brightness of the light received from a light source;

a second image sensor facing an observer located in a line of sight of the transparent display device, the second image sensor configured to obtain an image of the observer and generate image data representative of the image;

a processor coupled to the transparent display device, the first image sensor, and the second image sensor, the processor configured to:

receive the light intensity data from the first image sensor;

determine a location of the light source relative to the transparent display device based on the light intensity data;

receive the image data from the second image sensor;

determine a location of the observer relative to the transparent display device based on the image data; and,

control the transparent display device based on the location of the observer and the location of the light source to adaptively adjust a transparency at one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.

2. The system of claim 1, wherein the processor is further configured to adaptively adjust the transparency at the one or more areas of the transparent display device by darkening the one or more areas of the transparent display device.

3. The system of claim 1, wherein the transparent display device is integrated into a medium selected from the group consisting of a windshield of a vehicle, a windscreen of an airplane, a rear-view mirror of a vehicle, a side mirror of a vehicle, a visor of a helmet, and an eye glass lens.

4. The system of claim 1, further comprising a filter disposed in front of the first image sensor for reducing the intensity of the light received from the light source.

5. The system of claim 4, wherein the first image sensor is coupled to the filter and further configured to adjust a sensitivity of the filter to the light received from the light source.

6. The system of claim 1, further comprising an infrared light source for illuminating the observer, and wherein the second image sensor is configured to respond to infrared light.

7. The system of claim 1, wherein the processor is coupled to a controller of a vehicle, and wherein the processor is further configured to receive information from the controller of the vehicle and render on the transparent display device the information received from the controller of the vehicle.

8. The system of claim 1, wherein the first image sensor comprises at least two image sensors configured to obtain first distance information indicative of a distance of the light source relative to the transparent display device, and the processor is further configured to control the transparent display device based on the first distance information to adaptively adjust a transparency at the one or more areas of the transparent display device to shade the one or more areas of the transparent display device.

9. The system of claim 1, wherein the second image sensor comprises at least two image sensors configured to obtain second distance information indicative of a distance of the observer relative to the transparent display device, and wherein the processor is further configured to control the transparent display device based on the second distance information to adaptively adjust a transparency at the one or more areas of the transparent display device to shade the one or more areas of the transparent display device.

10. The system of claim 1, wherein the transparent display device comprises one or more transparent display screens.

11. The system of claim 1, wherein the location of the observer is indicative of a location of each eye of the observer relative to the transparent display device.

12. The system of claim 11, wherein the first image sensor is further configured to measure an intensity of light in the scene received from a plurality of light sources, and generate light intensity data indicative of brightness of each of the plurality of light sources, and wherein the processor is further configured to: receive, from the first image sensor, light intensity data indicative of brightness of light received from each of the plurality of light sources; determine a location of each of the plurality of light sources relative to the transparent display device based on the light intensity data; and control the transparent display device based on the location of each light source and the location of the observer to adaptively adjust a transparency at a plurality of areas of the transparent display device to at least partially block light received from each light source from passing through each of the plurality of areas of the transparent display device.

13. The system of claim 1, further comprising:

a camera configured to obtain an image of a scene behind the observer and generate image data representative of the image behind the observer.

14. The system of claim 13, wherein the processor is coupled to the camera and further configured to receive the image data representative of the image behind the observer, and render the image behind the observer on the transparent display device.

15. The system of claim 1, wherein the processor is further configured to repeatedly receive light intensity data over a time period, predict the location of the light source relative to the transparent display device based on the received light intensity data, and control the transparent display device based on the location of the observer and the predicted location of the light source to adaptively adjust the transparency at the one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.

16. A method comprising:

receiving, from a first image sensor facing a scene in front of a transparent display device, light intensity data indicative of brightness of light received from a light source in the scene observed at the first image sensor;

determining a location of the light source relative to the transparent display device based on the light intensity data;

receiving, from a second image sensor facing an observer located in a line of sight of the transparent display device, image data representative of an image of the observer obtained at the second image sensor;

determining a location of the observer relative to the transparent display device based on the image data; and

controlling the transparent display device based on the location of the observer and the location of the light source to adaptively adjust a transparency at one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.

17. The method of claim 16, wherein adaptively adjusting the transparency at the one or more areas of the transparent display device comprises darkening the one or more areas of the transparent display device.

18. The method of claim 16, further comprising:

receiving information from a controller of a vehicle; and

rendering on the transparent display device the information received from the controller of the vehicle.

19. The method of claim 16, further comprising:

repeatedly receiving light intensity data over a time period;

predicting the location of the light source relative to the transparent display device based on previously received light intensity data; and,

controlling the transparent display device based on the location of the observer and the predicted location of the light source to adaptively adjust the transparency at the one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.

20. A non-transitory computer-readable medium storing computer-readable instructions which, when executed by a processor of a computing device, cause the computing device to:

receive, from a first image sensor facing a scene in front of a transparent display device, light intensity data indicative of brightness of light received from a light source in the scene obtained at the first image sensor;

determine a location of the light source relative to the transparent display device based on the light intensity data;

receive, from a second image sensor facing an observer located in a line of sight of the transparent display device, image data representative of an image of the observer obtained at the second image sensor;

determine a location of the observer relative to the transparent display device based on the image data; and

control the transparent display device based on the location of the observer and the location of the light source to adaptively adjust a transparency at one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.

Description:
ADAPTIVE TRANSPARENT DISPLAY SYSTEM AND METHOD FOR ADAPTIVE OPTICAL SHIELDING

FIELD

[0001] The present invention relates to a system and method for providing brightness control using transparent displays and mirrors. More particularly, the present invention relates to a system and method for controlling the amount of light that is transmitted along different paths based on the brightness level of the light. This may be used to provide adaptive shading on front screens, shields, glasses, mirrors (including rear-view and side mirrors), helmets and similar devices intended to be placed in the path of vision between the human eye and the environment, or for improving the performance of cameras by increasing their dynamic range.

BACKGROUND

[0002] When driving a vehicle at night, a driver often faces the headlights of another vehicle travelling in the opposite direction. The driver of the approaching vehicle may be driving with that vehicle's high-beam headlights turned on to better view the road, and often forgets to turn the high beams off when passing, which can temporarily limit or block the view of the oncoming driver due to a temporary reduction in the driver's sensitivity to light.

[0003] Similarly, when driving a vehicle at sunrise or sunset, a driver may find it difficult to see the road when the sun is near the horizon (e.g. during sunrise or sunset, or during longer periods of time in some far northern or southern regions), because the dynamic range of the human eye is limited. A driver may somewhat limit the problems associated with being blinded by the sun by properly adjusting the vehicle's sun shade screen, which is located in front of the driver between the driver and the windshield. However, there are many situations where the sun is located at the same height above the horizon as a traffic light. If the driver adjusts the sun shade screen to block the sun, the view of the traffic light may become obstructed. However, if the driver does not adjust the sun shade screen, the bright sunlight can dazzle the eyes of the driver while the driver is looking at the traffic light. Similarly, when driving at night, it is not possible to simultaneously view the road and block the light from a passing driver. Moreover, it can be onerous for a driver to repeatedly move the sun shade screen up and down whenever another vehicle travelling in the opposite direction passes, or whenever the vehicle changes direction towards the sun.

[0004] Pilots of aircraft may be temporarily blinded when flying directly towards the sun. Commercial pilots are subject to a safety rule that requires them to close one eye when flying towards bright objects at night, so that some night vision is maintained when the closed eye is reopened after passing the light source. In combat situations, there may be similarly challenging visual environments, or even problems with a pilot being blinded by hostile laser light.

[0005] Another situation where lighting conditions pose a serious problem involves motorcyclists, who do not have the ability to adjust a sun screen and may be facing the sun directly, with the result of being partially or fully blinded, thereby presenting difficult and dangerous driving conditions. Similarly, when driving at night and another vehicle passes a vehicle from behind, or trails closely behind the vehicle, a driver may become temporarily blinded by the light from the headlights of the other vehicle that reflects off the rear-view and/or side mirrors. In some vehicles, it is possible to set the rear-view mirror to night mode to reduce the glare associated with another vehicle approaching from behind; however, when a rear-view mirror operates in night mode, all light is attenuated, which significantly reduces the ability of the driver to view the road behind. Furthermore, side mirrors generally do not reduce the glare associated with another vehicle approaching from behind, and thus reflections of light from the headlights of another vehicle approaching from behind can temporarily blind the driver.

SUMMARY

[0006] The present invention provides a novel system and method for providing adaptive shading, either as a transparent display placed between the human eye (or an image sensor) and a bright object, or as a mirror that includes adaptive screening. The term adaptive shadowing or screening means a display or mirror that changes the amount of light that passes through the display or mirror according to the current light conditions and scenery. The present invention relates to using an adaptive transparent display system to provide dynamic shadowing of the observer's view, thereby significantly reducing the problems associated with passing bright objects while still enabling a full view of the surroundings. Furthermore, while the present invention may be implemented with image sensors such as cameras and non-transparent displays, many advantages are offered by using transparent displays, because they offer an unaltered view of the surroundings without incurring any limitation as to the resolution of the display. In other words, a transparent adaptive display system of the present invention allows a normal, unaltered view of the surroundings while glare from bright objects is reduced by the adaptive shadowing provided by the system. Therefore, the present invention is applicable to multiple uses, such as adaptive sunscreens, adaptive sunglasses, and adaptive shadowing of screens for helmets, and is applicable to land, aviation and marine use.

[0007] Another implementation uses the adaptive shadowing in combination with an image sensor, e.g. a CMOS or CCD camera, to provide a significantly improved dynamic range, e.g. for professional digital movie capture or advanced photography.

[0008] Normally, the rear-view mirror of a vehicle has a night setting mode in which the amount of reflected light is significantly reduced. The advantage of this system is that the beams from the headlights of vehicles approaching from behind will not blind the eyes of the driver, but it has the disadvantage that almost the entire view disappears from the rear-view mirror due to the combination of low light conditions and a reduced reflection from the mirror. Therefore, the driver will be able to see the headlights of vehicles located behind and street lamps, but will have almost no other view. This poses a risk in situations where, for example, pedestrians or cyclists approach from behind without a light source and may thereby go unseen. Furthermore, the side mirrors of the vehicle will still tend to temporarily blind the driver whenever another driver is overtaking the vehicle, because side mirrors do not normally include a night mode setting. This is another problem with the current situation.

[0009] In some implementations, the same principles may be applied to mirrors located inside or outside a vehicle. In this implementation, the rear-view and side mirrors include an adaptive transparent display so that a portion of the scenery, typically the areas that reflect bright objects, is attenuated or blocked. This means that the reflected image will have a different brightness in these optically attenuated areas as compared to the original directly reflected image, which is a clear improvement over the present situation, where all areas of the rear-view mirror are attenuated equally when activating the night reflection mode that is used to reduce the glare from headlight beams from behind. This may be accomplished by combining a transparent display with a mirror, thereby enabling the amount of light that is reflected to be varied according to the intensity of the light. The advantage of this solution as compared to the combination of a camera and a display is that the mirror will have a virtually infinite resolution, while a display is limited to the number of pixels displayed. In order to provide correct attenuation in the appropriate portions of the display, it is important to obtain information about the position and brightness of objects and the position of the eye(s) or image sensors facing these objects. An image sensor such as a camera may be used to track the movement of the eyes inside the vehicle. This is important because the direct or reflected light will have different paths to different objects; therefore, it is important to know the position of both the transmitter and the receiver of the light. In some implementations, the camera facing the observer may be sensitive to infrared light, and an infrared light source may be used inside the cabin to provide adequate illumination of the eyes so that the position of the eyes can be correctly estimated even in dark conditions.
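The requirement above, knowing both the position of the light source (the transmitter) and the position of the eye (the receiver), reduces to a simple geometric step: the area to attenuate is where the straight line from eye to source crosses the display surface. The sketch below illustrates this under assumed conventions not stated in the specification: the transparent display lies in the plane z = 0, the eye is behind it (z < 0), and the source is in front (z > 0), all in one Cartesian frame.

```python
def shading_point(eye, source):
    """Return the (x, y) point on the display plane z = 0 where the
    line from the observer's eye to the light source crosses it.

    Illustrative sketch only: assumes the transparent display lies in
    the plane z = 0, with the eye behind it (z < 0) and the light
    source in front of it (z > 0), all in the same coordinate frame.
    """
    ex, ey, ez = eye
    sx, sy, sz = source
    if ez == sz:
        raise ValueError("eye and source must lie on opposite sides of the display")
    # Parameter t at which the eye-to-source segment reaches z = 0.
    t = -ez / (sz - ez)
    return (ex + t * (sx - ex), ey + t * (sy - ey))

# Example: eye 0.5 m behind the display, headlight 20 m in front.
x, y = shading_point(eye=(0.0, 1.2, -0.5), source=(2.0, 1.5, 20.0))
```

In a full implementation this point would be computed once per eye and per bright object, and the darkened area sized to cover the source's apparent extent.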

[0010] According to a first aspect of the present invention, there is provided a method of providing adaptive shadowing of light based on the brightness and position of light from one or more directions and the position of the observer of this light, where the observer can be a human being or a general image sensor.

[0011] Thus, the present invention can be seen as an extension of current solutions that provide a fixed or adjustable attenuation of light, but only a constant attenuation in all directions, irrespective of the variations in the current scenery.

[0012] Other features and advantages of the present invention are described more fully below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] Implementations of the present invention will be described, by way of example, with reference to the drawings and to the following description, in which:

[0014] Figure 1 shows an adaptive transparent display system to be placed as a display in front of the driver or as part of the front screen, including a position sensor such as a camera facing towards the driver, an image sensor such as a camera facing the driving direction, and a processing unit.

[0015] Figure 2 shows a helmet that includes the adaptive transparent display system.

[0016] Figure 3 shows a helmet that includes the adaptive transparent display system and additional features such as auditory enhancements and a rear camera.

[0017] Figure 4 shows a pair of sunglasses that includes an adaptive transparent display system for adaptively adjusting the shading of the glasses.

[0018] Figure 5 shows a back mirror that includes the adaptive transparent display system.

[0019] Figure 6 shows a side mirror that includes the adaptive transparent display system.

[0020] Figure 7 shows small displays placed at the corners of the instrument panel of a car suited to replace the standard side mirrors.

[0021] Figure 8 shows a transparent display device simultaneously providing shading and relevant information to the driver, which may be projected onto the front shield.

[0022] Figure 9 shows a camera that includes a transparent display device controlled by an image processor. The image processor increases the transparency over darker sections of the image sensor and reduces the transparency over brightly illuminated sections. This will increase the dynamic range of the camera and reduce problems with saturation in bright areas.

[0023] Figure 10 shows an adaptive transparent display system to be placed as a display in front of the driver or as part of the front screen, including a position sensor such as a camera facing towards the driver, who is wearing a pair of 3D glasses, an image sensor such as a camera facing the driving direction, and a processing unit. The 3D glasses allow the wearer to see an unobstructed view without any ghost images from the darkened portions in the path of the other eye.

[0024] Figure 11 shows a block diagram of a method that implements the present invention by providing adaptive shadowing for an image sensor, such as an eye, based on image brightness information and the position of the image sensor.

[0025] Figure 12 shows a block diagram of a method that implements the present invention to provide adaptive shadowing for an image sensor such as an eye based on image brightness information and a manual adjustment of the desired positioning of the adaptive shadowing.

[0026] Figure 13 shows a block diagram of a method that implements the present invention to provide adaptive shading of an image sensor based on the brightness of individual pixels and using this information to selectively provide dynamic shadowing of these pixels.

[0027] Figure 14 shows a block diagram of a method for providing adaptive eye shadowing for increased visibility using automatic eye tracking.

[0028] Figure 15 shows a graphical representation of a method of determining a location or position of an observer PD.

[0029] Figure 16 shows a graphical representation of an example of a method of obtaining coordinates of a bright object.

[0030] Figure 17 shows a graphical representation of an example of a method of determining a distance to the observer PD or a bright object using two cameras.

[0031] Figure 18 shows a graphical representation of an example of a method of determining a distance to the observer PD using a single camera.

[0032] Figure 19 shows a block diagram of an adaptive transparent display system for adaptively shadowing or shielding bright light in a field of view seen by an observer in accordance with an example implementation of the present specification.

[0033] Figure 20 shows a flowchart of a method for adaptively shadowing bright objects in a field of view seen by an observer in accordance with an example implementation of the present specification.
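The two-camera distance method referenced in Figure 17 is, in essence, stereo triangulation. A minimal sketch under the usual assumptions (two identical, parallel cameras with a known baseline and focal length in pixels, neither of which the specification fixes) is:

```python
def stereo_distance(baseline_m, focal_px, x_left_px, x_right_px):
    """Estimate the distance to an object from its horizontal pixel
    position in two parallel cameras separated by baseline_m.

    Illustrative sketch of the two-camera method of Figure 17:
    depth = f * B / d, where d is the disparity between the two
    image positions. Parameter values below are assumptions.
    """
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("object must appear further left in the left camera")
    return focal_px * baseline_m / disparity

# Example: 12 cm baseline, 800 px focal length, 4 px disparity.
d = stereo_distance(0.12, 800.0, 402.0, 398.0)  # -> 24.0 metres
```

The same relation applies whether the target is the observer PD (second image sensor pair) or a bright object in the scene (first image sensor pair).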

DETAILED DESCRIPTION

[0034] For simplicity and clarity of illustration, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Numerous details are set forth to provide an understanding of the implementations described herein. The implementations may be practiced without these details. In other instances, well-known methods, procedures, and components have not been described in detail to avoid obscuring the implementations described. The description is not to be considered as limited to the scope of the implementations described herein.

[0035] In the present disclosure, elements may be described as "configured to" perform one or more functions or "configured for" such functions. In general, an element that is configured to perform or configured for performing a function is enabled to perform the function, or is suitable for performing the function, or is adapted to perform the function, or is operable to perform the function, or is otherwise capable of performing the function.

[0036] It is understood that for the purpose of this disclosure, language of "at least one of X, Y, and Z" and "one or more of X, Y and Z" can be construed as X only, Y only, Z only, or any combination of two or more items X, Y, and Z (e.g., XYZ, XY, YZ, ZZ, and the like). Similar logic can be applied for two or more items in any occurrence of "at least one ..." and "one or more ..." language.

[0037] For the purposes of the present disclosure, bright light is light received from a light source, such as, for example, light emitted by the sun or the headlights of a vehicle, that appears in a field of view of an observer and has an intensity that exceeds a predetermined level at a medium. A light source is an object and/or a bright object that is emitting and/or reflecting light. A medium is any suitable instrument that passes or reflects light that is incident on a surface of the instrument, such as, for example, a windshield of a vehicle, a windscreen of an aircraft or watercraft, a window of a vehicle, a rear-view mirror of a vehicle, a side mirror of a vehicle, an eye glass lens, and the like.
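The definition of bright light above (intensity exceeding a predetermined level) maps directly onto a per-pixel threshold applied to the first image sensor's intensity data. The data layout and threshold value in this sketch are illustrative assumptions, not part of the specification:

```python
def bright_regions(intensity, threshold):
    """Return the (row, col) indices of pixels whose measured
    intensity exceeds the predetermined level, per the definition
    of bright light above. `intensity` is a 2-D list of sensor
    readings; the threshold value is an illustrative assumption.
    """
    return [
        (r, c)
        for r, row in enumerate(intensity)
        for c, value in enumerate(row)
        if value > threshold
    ]

# A 3x3 intensity map with one saturated pixel (e.g. a headlight).
frame = [[10, 12, 11],
         [13, 250, 12],
         [11, 12, 10]]
hotspots = bright_regions(frame, threshold=200)  # -> [(1, 1)]
```

Clusters of such pixels would then be grouped into light-source locations for the shading step.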

[0038] The present disclosure generally relates to a system and method for adaptively shadowing or shielding bright light in a field of view (or scene) seen by an observer to alter an amount of light seen by the observer to mitigate the problems associated with viewing bright light. In other words, the system and method of the present disclosure allows an observer an unaltered view of a scene either through the medium (when the medium is transparent) or reflected off of the medium (when the medium is a mirror) while reducing glare on the medium from bright light received from one or more light sources.

[0039] An aspect of the present specification provides a system that includes a transparent display device; a first image sensor facing a scene in front of the transparent display device, the first image sensor configured to measure an intensity of light in the scene received from a light source, and generate light intensity data indicative of brightness of the light received from the light source; and a second image sensor facing an observer located in a line of sight of the transparent display device, the second image sensor configured to obtain an image of the observer and generate image data representative of the image. The system also includes a processor coupled to the transparent display device, the first image sensor, and the second image sensor. The processor is configured to: receive the light intensity data from the first image sensor; determine a location of the light source relative to the transparent display device based on the light intensity data; receive the image data from the second image sensor; determine a location of the observer relative to the transparent display device based on the image data; and control the transparent display device based on the location of the observer and the location of the light source to adaptively adjust a transparency at one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.
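The processor steps enumerated in this aspect form a sense-locate-control loop. The sketch below strings them together; the classes, method names, and the mapping of a light source to a display area are all simplified placeholders for whatever interfaces a real implementation provides, not an API defined by the specification:

```python
def locate_bright_sources(intensity, threshold):
    # Placeholder: bright pixel coordinates stand in for source locations.
    return [(r, c) for r, row in enumerate(intensity)
            for c, v in enumerate(row) if v > threshold]

class FadeableDisplay:
    """Minimal stand-in for a transparent display device: records a
    transparency level per addressable area (1.0 = fully transparent)."""
    def __init__(self):
        self.levels = {}

    def set_transparency(self, area, level):
        self.levels[area] = level

def control_step(intensity, eye_positions, display, threshold=200):
    """One iteration of the loop described above. The mapping from an
    (eye, source) pair to a display area is simplified to the source's
    pixel coordinates; a real system would intersect the eye-to-source
    line with the display plane for each eye."""
    sources = locate_bright_sources(intensity, threshold)  # steps 1-2
    for eye in eye_positions:                              # steps 3-4
        for source in sources:                             # step 5
            display.set_transparency(area=source, level=0.2)

frame = [[10, 12], [13, 250]]
display = FadeableDisplay()
control_step(frame, eye_positions=[(0.0, 0.0)], display=display)
# display.levels now maps the bright area (1, 1) to 0.2
```

Running this loop repeatedly, once per sensor frame, gives the adaptive behaviour: areas darken when a source appears and return to full transparency when it leaves the scene.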

[0040] The processor may be further configured to adaptively adjust the transparency at the one or more areas of the transparent display device by darkening the one or more areas of the transparent display device.

[0041] The transparent display device may be integrated to a medium selected from a member of the group consisting of a windshield of a vehicle, a windscreen of an airplane, a windscreen of watercraft, a rear-view mirror of a vehicle, a side mirror of a vehicle, a visor of a helmet, and an eye glass lens.

[0042] The system may further include a filter disposed in front of the first image sensor for reducing the intensity of the light received from the light source.

[0043] The first image sensor may be coupled to the filter and further configured to adjust a sensitivity of the filter to the light received from the light source.

[0044] The system may further include an infrared light source for illuminating the observer and the second image sensor may respond to infrared light.

[0045] The system may utilize an alternate position sensor, such as an ultrasound or electromagnetic radiation sensor, to determine the location or position of the observer.

[0046] In some implementations, the location or position of the observer's eyes may not be known and instead the system may increase the size of the one or more darkened areas to compensate for this.

[0047] The processor may be coupled to a controller of a vehicle, and the processor may be further configured to receive information from the controller of the vehicle and render on the transparent display device the information received from the controller of the vehicle.

[0048] The first image sensor may include at least two image sensors configured to obtain first distance information indicative of a distance of the light source relative to the transparent display device, and the processor is further configured to control the transparent display based on the first distance information to adaptively adjust a transparency at the one or more areas of the transparent display device to shade the one or more areas of the transparent display.

[0049] The second image sensor may include at least two image sensors configured to obtain second distance information indicative of a distance of the observer relative to the transparent display device, and the processor is further configured to control the transparent display device based on the second distance information to adaptively adjust a transparency at the one or more areas of the transparent display device to shade the one or more areas of the transparent display.

[0050] The transparent display device may include one or more transparent display screens.

[0051] The location of the observer may be indicative of a location of each eye of the observer relative to the transparent display device.

[0052] The first image sensor may be further configured to measure an intensity of light in the scene received from a plurality of light sources, and generate light intensity data indicative of brightness of each of the plurality of light sources and the processor may be further configured to receive, from the first image sensor, light intensity data indicative of brightness received from each of the plurality of light sources; determine a location of each of the plurality of light sources relative to the transparent display device based on the light intensity data, and control the transparent display device based on the location of each light source and the location of the observer to adaptively adjust a transparency at the plurality of areas of the transparent display device to at least partially block light received from each of the plurality of light sources from passing through each of the plurality of areas of the transparent display device.
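The multi-source behaviour described in paragraph [0052] can be sketched as follows. This is an illustrative simplification, not the patented implementation: the display is taken as the plane z = 0, the eye and source positions are hypothetical (x, y, z) coordinates, and each darkened area is reduced to its centre point.

```python
def darkened_area_centres(eyes, sources, brightness_threshold):
    """Return one (x, y) display coordinate to darken per eye per bright source.

    `eyes` are (x, y, z) positions behind the display (z < 0); `sources`
    are ((x, y, z), brightness) pairs in front of it (z > 0).
    """
    centres = []
    for ex, ey, ez in eyes:
        for (sx, sy, sz), brightness in sources:
            if brightness < brightness_threshold:
                continue  # source is not bright enough to need shading
            # Parameter where the straight line from source to eye
            # crosses the display plane z = 0.
            t = sz / (sz - ez)
            centres.append((sx + t * (ex - sx), sy + t * (ey - sy)))
    return centres
```

Each centre would then be expanded to a finite darkened area sized to cover the apparent extent of the source.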

[0053] The system may further include a camera configured to obtain an image of a scene behind the observer and generate image data representative of the image behind the observer.

[0054] The processor may be coupled to the camera and further configured to receive the image data representative of the image behind the observer, and render the image behind the observer on the transparent display device.

[0055] The processor may be further configured to repeatedly receive light intensity data over a time period, predict the location of the light source relative to the transparent display device based on the received light intensity data and control the transparent display device based on the location of the observer and the predicted location of the light source to adaptively adjust the transparency at the one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.
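The predictive control in paragraph [0055] could, for example, use a constant-velocity extrapolation of the tracked light-source location. The sketch below assumes 2D display-plane coordinates and is an illustration of one possible predictor, not the claimed method.

```python
def predict_location(history):
    """Predict the next (x, y) light-source location from the two most
    recent observations using constant-velocity extrapolation; with fewer
    than two observations, repeat the last known location."""
    if len(history) < 2:
        return history[-1]
    (x0, y0), (x1, y1) = history[-2], history[-1]
    # Assume the source keeps moving with the same displacement per step.
    return (2 * x1 - x0, 2 * y1 - y0)
```

A real system would likely smooth the history (e.g. with a Kalman filter) rather than use only two samples.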

[0056] Another aspect of the present specification provides a method comprising:

receiving, from a first image sensor facing a scene in front of the transparent display device, light intensity data indicative of brightness of a light received from a light source in the scene; determining a location of the light source relative to the transparent display device based on the light intensity data; receiving, from a second image sensor facing an observer located in a line of sight of the transparent display device, image data representative of an image of the observer obtained at the second image sensor; determining a location of the observer relative to the transparent display device based on the image data; and, controlling the transparent display device based on the location of the observer and the location of the light source to adaptively adjust a transparency at one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.
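The geometric core of the method above — darkening the display where the line from the light source to the observer's eye crosses it — can be sketched as one control-loop iteration. All names, coordinates and the transparency level are illustrative assumptions (display plane at z = 0), not the claimed implementation; `locate_source` and `set_transparency` stand in for the sensor processing and display driver.

```python
def control_step(light_intensity_image, observer_eye, locate_source, set_transparency):
    """One iteration of the shading loop, as a sketch.

    `locate_source` maps the intensity image to an (x, y, z) source
    location; `set_transparency` darkens the display at an (x, y) point.
    The observer's eye is at (x, y, z) with z < 0, the source at z > 0.
    """
    source = locate_source(light_intensity_image)
    ex, ey, ez = observer_eye
    sx, sy, sz = source
    # Fraction of the way from source to eye where the line meets z = 0.
    t = sz / (sz - ez)
    centre = (sx + t * (ex - sx), sy + t * (ey - sy))
    set_transparency(centre, level=0.2)  # e.g. reduce to 20 % transmission
    return centre
```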

[0057] Adaptively adjusting the transparency at the area of the transparent display device may include darkening the one or more areas of the transparent display.

[0058] The method may further include receiving information from the controller of the vehicle; and, rendering on the transparent display device the information received from the vehicle.

[0059] The method may further include: repeatedly receiving light intensity data over a time period; predicting the location of the light source relative to the transparent display device based on previously received light intensity data; and, controlling the transparent display device based on the location of the observer and the predicted location of the light source to adaptively adjust the transparency at the one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.

[0060] Another aspect of the present specification provides a non-transitory computer-readable medium storing computer-readable instructions which, when executed by a processor of a computing device, cause the computing device to: receive, from a first image sensor facing a scene in front of the transparent display device, light intensity data indicative of brightness of a light received from a light source in the scene; determine a location of the light source relative to the transparent display device based on the light intensity data;

receive, from a second image sensor facing an observer located in a line of sight of the transparent display device, image data representative of an image of the observer obtained at the second image sensor; determine a location of the observer relative to the transparent display device based on the image data; and, control the transparent display device based on the location of the observer and the location of the light source to adaptively adjust a transparency at one or more areas of the transparent display device to at least partially block light received from the light source from passing through the one or more areas of the transparent display device.

[0061] Figure 1 shows a transparent adaptive display system for vehicles and the like to provide adaptive attenuation in the direction of bright light sources, thereby improving the field of view of the driver, pilot or captain, who will no longer be blinded by the sun or other bright objects. The transparent adaptive display system includes a camera to obtain a view of the scenery and tracks the position of the driver's or pilot's eyes to provide selective shadowing towards bright objects in the direction of view.

[0062] Referring now to Figure 1, a schematic representation of a transparent adaptive display system 100 implemented in an adaptive frontshield to provide improved viewing conditions for a driver, pilot, captain or the like is shown. The driver's head 115 and eyes 109-1, 109-2 and the light source 117 are not part of the present invention. These objects have been included in the drawing in order to show the functionality of the transparent adaptive display system. The transparent adaptive display system of the present invention includes the following components: a transparent display device 101 (e.g. an LCD display) that is able to darken or partially darken areas based on information obtained from a front facing camera 105-1 (e.g. a first image sensor) that is used to measure the environmental light conditions, and a rear facing camera 105-2 (e.g. a second image sensor) that is used to obtain a view of the driver, pilot or captain in order to estimate the location or position of the eyes 109-1 and 109-2 of the driver relative to the transparent display device 101. The transparent display device 101 is placed behind a front shield 102.

[0063] Shown in Figure 1 are darkened areas 103-1 and 103-2 that attenuate light received from the light source 117 so that the eyes 109-1 and 109-2 of the driver are protected from the light source 117 and the amount of light from the light source 117 that reaches the eyes 109-1 and 109-2 is significantly reduced. This enables a clear view of the surroundings including the light source 117 and avoids straining the eyes 109-1 and 109-2 by avoiding a direct unattenuated line of sight to the light source 117. Furthermore, Figure 1 shows darkened fields 107-1 and 107-2, which indicate that the light from the light source 117 has passed through darkened areas 103-1 and 103-2 and is now attenuated.

[0064] The signal processing unit 113 receives images from cameras 105-1 and 105-2 and based on these images generates the desired dark areas 103-1 and 103-2 to be activated on the active transparent display device 101.

[0065] Shown in Figure 1 are the fields of view 111-1 and 111-2 of the cameras 105-1 and 105-2 respectively. In some implementations, an optional light source 119, e.g. an infrared light source, may illuminate the driver's head 115, thereby providing better tracking of the eyes 109-1 and 109-2 of the driver. The advantage of using an infrared light source 119 is that, while the camera 105-2 may be sensitive to these wavelengths, the human eye is not, and therefore the light will not disturb the driver's view even though it illuminates the driver's head 115 and eyes 109-1 and 109-2. The position of the eyes 109-1 and 109-2 is used to calculate which portions of the view need to be attenuated.

[0066] In some implementations, the infrared light source 119 may not be needed. Figure 1 is just an illustration of the general principles, and the present invention is not limited to the implementation shown. In some implementations, the cameras 105-1 and 105-2 may be combined into a single camera by utilizing mirrors, lenses and the like in order to collect image information from two or more directions. In some implementations, more than two cameras may be used to obtain image and distance information.

[0067] In some implementations, only the forward facing camera 105-1 may be implemented and the driver may manually adjust the position of the darkened areas to match the position of the driver's eyes in the current view. Even though a single light source 117 is shown in Figure 1, multiple light sources could be present, e.g. two headlights from a passing vehicle pointing towards the driver. In some cases, there may be no light sources present in the scenery but only strong variations in the reflected light intensity. In these cases, multiple darkened areas would be displayed on the transparent display device 101. In some implementations, two front facing cameras instead of just one (105-1) may be used in order to obtain distance information in addition to the brightness information. This may be used for enhanced safety measures such as warnings of an impending collision or automatic brake control, during parking procedures, or for better control of the adaptive shading. Similarly, in some embodiments, two back facing cameras instead of just one (105-2) may be used in order to obtain distance information in addition to the brightness information. This may be used for better control of the adaptive shading. In some embodiments, the functionality of the front (105-1) and back (105-2) facing cameras may be swapped, in case the user flips the adaptive transparent display system to a different position, e.g. to the left side of the driver instead of in front of the driver.
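The distance information obtained from a second front or rear facing camera, as described above, is conventionally computed by stereo triangulation. A minimal sketch under the usual rectified pinhole-camera assumptions (all parameter names are illustrative, not from the specification):

```python
def stereo_distance(baseline_m, focal_length_px, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d, where B is the
    camera separation in metres, f the focal length in pixels and d the
    horizontal pixel offset of the same feature between the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline_m / disparity_px
```

For example, two cameras 10 cm apart with a 1000-pixel focal length observing a 50-pixel disparity would place the object 2 m away.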

[0068] In some implementations, camera 105-2 may be replaced by another position sensor, such as an ultrasonic or electromagnetic sensor configured to determine a location or position of the eyes 109-1, 109-2 of the driver.

[0069] In some implementations, the adaptive transparent display system may be implemented as part of a motorcyclist's or pilot's helmet. In this case either two transparent displays or a single wide transparent display covering all or part of the view of the driver shall be used. It shall be appreciated that motorcyclists and pilots normally do not have any means of reducing the sunlight when driving or flying directly towards bright light such as the sun, except by partially or fully closing the eyelids, by use of sunglasses, or by a fixed shading provided by the helmet. In some implementations, such a helmet may include metal protection and padding such as a shock absorption padding to reduce the results of an impact. In some implementations, the helmet may include one or more rear facing cameras in order to allow the driver a full 360 degree view of the surroundings including traffic from behind by displaying the rear view on parts of the display in front of the driver's eyes and as part of the helmet.

[0070] Typically, the helmet would include an electronic computing unit to process the image data obtained from the image sensor and the at least one eye position sensor, a power supply and a power source such as one or more rechargeable batteries. The eye position sensor would be used to obtain the direction the eyes are facing. Due to the symmetry of the human eyes (both facing in the same direction), a single image sensor may be used to obtain the eye direction for both eyes.

[0071] In some implementations, the second image sensor facing the wearer of the helmet may not be needed and instead the size of the darkened areas may be increased to provide attenuation of bright light viewed by the eyes regardless of the location or position of the eyes.
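Enlarging the darkened area when no eye sensor is present, as in paragraph [0071], amounts to shading every crossing point the source-to-eye line could take over the range of plausible eye positions. A one-dimensional sketch (display plane at z = 0; all names and coordinates are illustrative assumptions):

```python
def enlarged_darkened_span(source, eye_x_range, eye_z):
    """Span of display x-coordinates to darken so the light source is
    shaded for every plausible eye x-position in `eye_x_range`.

    `source` is (x, z) with z > 0; the eyes sit at depth `eye_z` < 0.
    """
    sx, sz = source
    t = sz / (sz - eye_z)  # crossing parameter is the same for all eye x
    crossings = [sx + t * (ex - sx) for ex in eye_x_range]
    return (min(crossings), max(crossings))
```

The wider the assumed head box, the larger the darkened span, trading unobstructed view for robustness.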

[0072] Figure 2 shows an active helmet for motorcyclists, pilots and the like that includes a traditional protective enclosure, a transparent display, the at least one camera and a signal processing unit. The active helmet allows selective darkening in the direction of bright objects. This solves the classic problem for motorcycle drivers facing the sun near the horizon, e.g. when driving at sunrise or sunset, and helps pilots flying towards the sun, thereby avoiding blinding the driver or pilot. By including adaptive shadowing, it is possible to obtain an unaltered view of the surroundings while attenuating bright light sources.

[0073] Referring now to Figure 2, a block diagram of a transparent adaptive display system 200 (hereinafter system 200) implemented in an active helmet is shown. The head 215 of the wearer of the helmet is not part of the present invention, nor are the eyes 209-1 and 209-2 of the wearer or the bright object 217. These objects have been included in Figure 2 in order to better describe the functionality of the present invention. The system 200 comprises the following parts: a protective outer casing 221, shock absorbent internal padding 202-1 and 202-2, an opening and possibly an environmental transparent protection window 204, a transparent display 201 to provide adaptive shadowing as shown by darkened areas 203-1 and 203-2 (producing darkened fields 207-1 and 207-2) and cameras 205-1, 205-2 providing forward looking and rear looking views. In some implementations, one camera 205-2 may monitor one eye 209-1 while another camera (not shown) monitors eye 209-2. In other implementations, a single camera will monitor both eyes, and in yet other implementations, a single camera 205-2 will monitor a single eye 209-1 while the position of the other eye 209-2 will be estimated from the direction of the line of sight of the first eye 209-1, because under normal viewing conditions both human eyes focus towards the same object. The fields of view of the cameras 205-1 and 205-2 are shown in Figure 2 as 211-1 and 211-2. It is important that the field of view 211-1 of the external camera covers the entire field of view of the wearer of the helmet of the system 200, or at least the area intended to be covered by the transparent display 201. It is important that the field of view 211-2 of the camera 205-2, or of two rear facing cameras, is able to cover any possible position of one or both eyes 209-1 and 209-2 and thereby provide reliable position information to the signal processing unit 213.
Finally, an internal light source 219 may provide visible or invisible light such as infrared light onto the face of the wearer of the helmet thereby providing a better ability to determine the position of the eye(s) even under low external light conditions.

[0074] A signal processing unit 213 receives the images from cameras 205-1 and 205-2 and, based on the brightness of the forward looking view 211-1 and the position of the eyes 209-1 and 209-2, provides an adaptive shading of the transparent display 201. A power source (not shown) provides energy to the circuits employed in the system 200. This will often be composed of one or more rechargeable batteries and power conditioning circuits but may be implemented using other methods.

[0075] Figure 3 shows an active helmet with a separate display allocated to each eye or a combined display covering both eyes, the at least one image sensor having a view in front of the wearer of the helmet and a sensor to detect the position of one or more eyes. Furthermore, the helmet may include one or more cameras facing backwards from the helmet, thereby enabling the wearer a 360 degree view of the surroundings by projecting the rear scenery onto the display in front of the wearer, possibly compressing the backwards view to a lower or side portion of the display(s).

[0076] Referring now to Figure 3, a block diagram of a transparent active display system 300 (hereinafter referred to as system 300) implemented in an active helmet with a separate display allocated to each eye or a combined display covering both eyes is shown. System 300 is similar to system 200, with similar elements having similar numbers, however in a "300" series rather than a "200" series, except when otherwise indicated; this convention will be used throughout the present specification. The system 300 comprises an advanced motorcyclist's or pilot's helmet that provides additional or different features as compared to system 200. Some of the features shown in Figure 3 may be omitted and others may be added, but this system will in any case provide at least one additional feature not included in the description of system 200. The head 315 of the wearer of the helmet, the ears 323-1 and 323-2 of the wearer, the eyes 309-1 and 309-2 of the wearer and the bright object 317 are not part of the invention. These objects have been included in the figure and the description to better explain the functionality of the system 300.

[0077] The additional features are as follows: an audio system comprising at least one or more speakers 319-1 and 319-2; an audio control or entertainment system 327; a wireless communication unit such as a mobile phone unit or a two-way radio 329; and one or more microphones 321-1, 321-2, 321-3 and 321-4 for providing feedback of the auditory scenery of the environment, possibly including noise reduction implemented using the entertainment system 327, and for providing voice feedback from the wearer of the helmet. By providing automatic noise cancellation of the environment, the user may have a better driving or flying experience due to the decreased noise level. The helmet may include microphone(s) for pickup of the wearer's voice close to the mouth (not shown).

[0078] The helmet may include features such as shock padding (not shown, but similar to 202-1 and 202-2 in Figure 2), while features 301-313 are similar to features 201-213 as shown in Figure 2 and included in the description of system 200. In some systems one or more of these features may be omitted.

[0079] Optionally the system may include a rear facing camera 305-3 that enables a rear view of the scenery and projects this image onto the transparent display 301, either superimposed on the front view or placed at the sides, top, bottom or middle of the display so as to disturb the main view as little as possible.

[0080] In some implementations, the rear facing camera 305-3 may be placed outside the helmet and in a different position to avoid any structures pertaining to the physical nature of the vehicle, vessel, airplane or helicopter that could obstruct the view. The connection between an externally placed camera and the helmet may be wired or wireless.

[0081] In some implementations, the rear placed camera 305-3 (providing field of view 311-3) may be used to issue warnings, either optically using messages or symbols placed on the display 301 or using audible means such as warning messages or specific sequences such as tones, to warn against specific events such as a vehicle approaching rapidly from behind. In some implementations, approaching objects may be detected using a distance sensor (not shown), such as, for example, a radar detector included in the helmet or connected to the helmet.

[0082] The rear facing microphone 321-3 may be used as a noise reduction microphone and as a pickup device for the auditory environment. If this audio information is provided to speakers 319-1 and 319-2, it may give the wearer of the helmet early warning of any vehicles approaching from behind. In some implementations, the direct audio signal may be used; in other cases the audio signal may be filtered and compressed in one or more frequency bands in order to optimize its audibility. Optionally, the rear facing microphone may also be used as part of an ANC (active noise cancellation) system.

[0083] Figure 4 shows a pair of adaptive glasses intended to be used to decrease the brightness of bright objects such as the sun, approaching headlights or other blinding objects.

[0084] Referring now to Figure 4, a block diagram of a transparent active display system 400 (hereinafter system 400) implemented in a pair of adaptive sunglasses is shown. The system 400 comprises a transparent display 401 that allows selective shading of portions of the display in order to provide a different attenuation towards bright objects such as the light source 417 shown in Figure 4. The head 402 of the person wearing the glasses, the ears 423-1 and 423-2, the eyes 409-1 and 409-2 and the bright object 417 are not part of the invention. These objects have been included in the figure and the description to better explain the functionality of the system 400. System 400 may also optionally include a light source 419, such as, for example, an infrared light source, to provide illumination of the eye 409-1 of the wearer.

[0085] Figure 4 shows darkened areas 403-1 and 403-2 on the display providing shadows 407-1 and 407-2, thereby significantly attenuating or changing the amount of light from light source 417 being transmitted to the eyes 409-1 and 409-2. Figure 4 shows one forward looking camera 405-1 with view 411-1 and one rear facing camera 405-2 with view 411-2. In some implementations, two rear facing cameras may be used to provide an independent view of each of the eyes 409-1 and 409-2. In other implementations, a single camera may provide a view of eye 409-1 and the direction of eye 409-2 may be estimated from the direction of eye 409-1. In other implementations, a single camera may provide a view of both eyes 409-1 and 409-2 and thereby provide the direction both eyes are facing, or the forward and rear facing cameras 405-1 and 405-2 may be combined into a single sensor unit using optical means such as lenses, mirrors and the like.

[0086] In other implementations, there may be no rear facing camera 405-2 and instead the size of the one or more darkened areas on the transparent display device may be increased to prevent the eyes from being blinded regardless of the position or location of the eyes.

[0087] The signal processing unit 413 obtains information from one or more cameras such as 405-1 and 405-2 and provides shadowing information (display information) to the transparent display 401. In some implementations, it is possible to provide some shadowing to all parts of the display 401 and a different, additional attenuation in some directions depending on the environmental light conditions, providing an optical attenuation that depends on the brightness and path of light passing towards the eyes 409-1 and 409-2. This may be seen as an extension to well-known photochromic glasses, which respond to changes in light conditions but do so evenly over the entire field of view.

[0088] The glasses may further include temples 425-1 and 425-2 to be placed on the wearer's ears 423-1 and 423-2 to hold the glasses. Similarly, the glasses may be shaped so as to provide a support on the nose in order to make the system comfortable to wear. A power supply unit, such as a rechargeable battery and power conditioning unit (not shown), provides power to the system 400 and its components.

[0089] In some implementations, the glasses may include optical correction using lenses. The optical lenses may be included as part of the optical system, or a display with a controlled refractive index may be used to provide optical correction for a user with imperfect eyesight (e.g. long- or short-sighted) and thereby improve their vision. In some implementations, active glasses with a controlled refractive index may be combined with the adaptive transparent shadowing display 401. In some implementations, changes in the index of refraction of the LCD display itself may be used to provide this functionality in addition to providing the darkening of certain areas of the transparent screen.

[0090] By using information from both eyes obtained from one or more rear or front facing cameras, it is possible to estimate which objects the eyes are facing. This may be used to provide a variable index of refraction in the display that varies with the distance to the object in focus, and thereby addresses the problem of the eye's lens stiffening with age. This may provide a combination of short and long distance focusing over the entire field of view without any compromises, as compared to the current situation where one field of view may be mainly suited for short distance viewing and one field of view mainly suited for long distance viewing. The focusing shall be accomplished by changing the index of refraction of the display electronically, varying this over the field of view.
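The distance to the object in focus can be estimated from the convergence of the two gaze directions. The sketch below assumes symmetric convergence and measures each eye's inward rotation from straight ahead; it illustrates the underlying geometry rather than the patented method, and all names are hypothetical.

```python
import math

def focus_distance(ipd_m, left_inward_rad, right_inward_rad):
    """Estimate the fixation distance from eye vergence.

    `ipd_m` is the interpupillary distance in metres; the two angles are
    each eye's inward rotation in radians. Parallel gaze (zero total
    convergence) corresponds to focusing at infinity.
    """
    convergence = left_inward_rad + right_inward_rad
    if convergence <= 0:
        return math.inf
    # For symmetric convergence, tan(convergence / 2) = (ipd / 2) / distance.
    return (ipd_m / 2) / math.tan(convergence / 2)
```

The display's electronically controlled refractive index could then be driven from this estimated distance.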

[0091] In some implementations, the glasses may display additional information such as text messages, browser information, pictures or other graphics information. This information may be provided by an external wireless unit or by wired means (not shown). The wireless or wired connection may be connected to the internet or to a database. In some implementations, the glasses may include an audio system similar to the audio system in system 300.

[0092] Figure 5 shows a rear-view mirror that provides a rear view of the scenery behind the driver. The rear-view mirror includes a transparent display with adaptive shadowing. Traditional rear-view mirrors have a day and night setting and thereby provide the same attenuation over the entire field of view in the night setting. The adaptive rear-view mirror provides a higher attenuation in directions towards bright objects, thereby simultaneously providing attenuation in the desired direction, such as towards bright headlights, and enabling an unaltered view of the rest of the surroundings, which is important at night with the associated lower visibility.

[0093] Referring now to Figure 5, a block diagram of a transparent active display system 500 (hereinafter referred to as system 500) implemented in a rear-view mirror of a vehicle is shown. The head of the user 515, the eyes 509-1 and 509-2 of the user 515 and the bright object 517 are not part of the present invention. These objects have been included in the drawing to explain the system 500 better. The system 500 comprises the following components: a front shield 502 provides protection against the environment and various weather conditions, and a combined mirror and transparent display 501 provides a rear view. In some implementations, the combined mirror and transparent display 501 may include information from a rear camera (not shown) or distance sensors (not shown), such as, for example, sensors that determine distance to objects based on radar principles, e.g. to be used during parking maneuvers and reverse driving. The combined mirror and transparent display 501 provides attenuation, indicated by regions 503-1 and 503-2, in the direction of bright objects such as light source 517. Shown in Figure 5 are shadows 507-1 and 507-2 that appear as a result of the darkening of the display in regions 503-1 and 503-2.

[0094] Notice that, due to the offset between eyes 509-1 and 509-2, two darkening spots (e.g. regions 503-1 and 503-2) are required. However, the human brain will merge the images from the two eyes. Therefore, the dark spot intended for the opposite eye (region 503-1 for eye 509-1 and region 503-2 for eye 509-2) will appear as a ghost spot (i.e. transparent but still visible), thereby attenuating the scenery somewhat in these regions.
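The offset between the two darkening spots follows directly from the eye separation. A hypothetical numeric sketch (display plane at z = 0; coordinates and names are illustrative, not from the specification):

```python
def per_eye_spot_centres(source, left_eye, right_eye):
    """Centre of the darkened spot needed for each eye separately.

    `source` is (x, y, z) with z > 0; each eye is (x, y, z) with z < 0.
    The gap between the two returned centres is why each eye also
    perceives the other eye's spot as a ghost spot.
    """
    def crossing(eye):
        ex, ey, ez = eye
        sx, sy, sz = source
        t = sz / (sz - ez)  # where the source-to-eye line meets z = 0
        return (sx + t * (ex - sx), sy + t * (ey - sy))
    return crossing(left_eye), crossing(right_eye)
```

With eyes 6 cm apart and half a metre behind the display, the two spot centres land roughly 4 cm apart for a distant central source.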

[0095] In one implementation, the user may optionally wear a pair of 3D glasses (not shown) that is synchronized by the processing unit 513 to remove these ghost images. The ghost images will be removed by first providing image information to one eye, during which the image is shut off from the other eye using 3D shutter controlled glasses. The synchronization between the shutter action of the 3D glasses and the transparent display will happen using a wired or a wireless connection, e.g. infrared light or a radio signal.

[0096] The mirror included in the combined mirror and transparent display 501 provides a reflection of the scenery behind the user 515, while the transparent display of the combined mirror and transparent display 501 performs the darkening of bright objects. A backward looking camera 505-2 (having field of view 511-2) provides information about the position of the head of the user 515 and the eyes 509-1, 509-2 of the user 515. The backward looking camera 505-2 may also be used to detect bright light sources such as light source 517, or may be supplemented by one or more additional cameras (not shown) to provide distance information and the like. In some implementations, the rear facing mirror may display additional information from a rear facing camera, thereby aiding parking in tight spots, or provide warning signals or other relevant information.

[0097] A signal processing unit 513 captures the image information from camera 505-2 and based on this updates the transparent display 501 to provide a higher attenuation based on the position of the eyes 509-1 and 509-2 and the position and brightness of the light source 517. In practice, multiple bright objects may be encountered in which case the combined mirror and transparent display 501 provides proper shadowing in the proper regions in order to provide selective shadowing of the scenery.

[0098] The system 500 may also optionally include an additional light source 519, such as, for example, an infrared light source, to provide illumination in region 521.

[0099] Figure 6 shows side mirrors with adaptive shading. Normally side mirrors do not include any adjustment of the transparency. The adaptive shading may either include a single setting for the entire transparent mirror or be able to selectively darken portions of a transparent display placed in front of the mirror. This allows an unaltered view of the surroundings, including dark objects, while attenuating bright objects. This is a particular advantage when vehicles pass by at night, because their headlights will then not temporarily blind the driver via the reflection in the side mirror.

[00100] Referring now to Figure 6, a block diagram of a transparent active display system 600 (hereinafter referred to as system 600) implemented in a side mirror of a vehicle is shown. The head 615 of the user, the eyes 609-1 and 609-2 of the user and the bright object 617 are not part of the invention. These objects have been included in Figure 6 to better explain the functionality of the system 600. The system 600 comprises a side mirror 604 along with a transparent display 601. Darkened areas 603-1 and 603-2 on the display 601 provide attenuation in the path from bright object 617 towards the driver's eyes 609-1 and 609-2, leaving shadows 607-1 and 607-2. The system 600 includes the at least one image sensor 605 providing field of view 611-1 to determine the position of the eyes 609-1 and 609-2 while also tracking the position of bright objects, such as light source 617. In some implementations, the system 600 may be supplemented by additional cameras (not shown). A light source, such as an infrared light emitting diode 619, may be used to provide illumination of the user's head 615 and eyes 609-1 and 609-2.

[00101] In some implementations, the user may adjust the position of the dark spots, thereby obviating the need to track the position of eyes 609-1 and 609-2 but requiring the user to maintain a relatively stable position within the vehicle.

[00102] A signal processing unit 613 obtains information from image sensor 605 and provides adaptive shading information to the transparent display 601 based on the position of the eyes 609-1 and 609-2 and of any bright objects such as 617.

[00103] A front shield 602 provides protection against the environment and various weather conditions.

[00104] In some implementations, the side mirror 604 includes a heater that heats the side mirror 604 and transparent display 601. To save power, this heater will typically be activated only under low-temperature conditions. The heater may be included to avoid the build-up of snow and ice on the mirror and display under certain weather conditions.

[00105] The system 600 is powered by a power source, e.g. a rechargeable battery and power conditioning circuits, not shown in Figure 6.

[00106] Figure 7 shows a system 700 that includes adaptive displays mounted inside a car to replace the side mirrors and the central mirror, thereby avoiding problems with direct reflections from bright objects. Replacing the side mirrors with integrated displays and cameras has two advantages. First, there is no risk of damaging the side mirrors while driving or when parked. Second, the adaptive lighting control can be integrated into the vision system, enabling a full 360-degree view of the surroundings for the driver instead of a limited view determined by the exact positioning of the mirrors and by any obstructions, whether part of the structure of the vehicle itself or objects located inside the vehicle.

[00107] Referring now to Figure 7, a block diagram of the system 700 is shown. System 700 comprises one or more displays 701-1, 701-2 and 701-3 and two or more cameras 705-1, 705-2, 705-3, 705-4 and 705-5 (having respective fields of view 711-1, 711-2, 711-3, 711-4 and 711-5) that provide image information of the scenery around the car that is not directly visible from the driver's position. The eyes 709-1 and 709-2 of the driver are not part of the present invention but have been included in Figure 7 for a better understanding of the principles involved. A signal processing unit 713 processes the image information from the two or more cameras 705-1, 705-2, 705-3, 705-4 and 705-5 and displays this processed image information on a transparent or non-transparent display. In some cases, the display 701-2 may both display the rear scenery and provide active shadowing of bright objects in front of the driver; this may require front-facing and rear-facing cameras similar to system 300. The information may be displayed on a single display or on multiple displays in front of the driver as shown in Figure 7. In some implementations, the rear information may be projected onto the front screen 702. The advantage of this system is the lack of side mirrors, which eliminates the risk of collisions with them and lowers the air drag of the vehicle. Furthermore, the side cameras 705-1, 705-2, 705-3 and 705-4 may provide a wider viewing angle than is possible using a single side mirror, thereby providing a wider field of view for a safer driving experience. They may be combined with an optical processing unit providing warnings if the driver is about to perform a dangerous maneuver, such as entering the region of another driver.

[00108] Figure 8 shows an adaptive display system according to an example implementation, as seen from inside the car, covering the entire front window. The system allows the transparency of the windows to be changed, including adding a fixed attenuation to all window sections and attenuating some sections more, either adaptively based on the surroundings or based on user preferences. Furthermore, the display may include a representation of relevant driver information such as speed, oil level and GPS directional information.

[00109] Referring now to Figure 8, a block diagram of the adaptive display system 800 is shown. The eyes 809-1 and 809-2 of the user and the light source 817 are not part of the present invention but are included in Figure 8 for illustrative purposes. System 800 comprises a transparent display that is able to attenuate bright light received from light sources, similar to system 100, and includes the display of additional information such as driving speed, oil status 827-1 and directional information such as GPS guidance 827-2 and 827-3. This information may be projected onto the front screen 802 or be included as graphics on the transparent display. In some implementations, the side mirrors may include adaptive shading similar to system 600 or be fully integrated using cameras and displays similar to system 700. The central mirror 801 may either be implemented as a transparent adaptive mirror similar to system 500, or it may display the rear scenery from a rear camera similar to system 700. A signal processing unit 813 processes the image information from the cameras and provides image information to the displays and/or transparent displays and mirrors. The numbering and function of the individual system components 803, 805, 807, 809, 811, 819 are similar to the components 103, 105, 107, 109, 111, 119 of the system 100 with a change in only the first digit; for example, a camera 805 is similar to a camera 105.

[00110] In some implementations, the central mirror 801 (e.g. rear display) may be implemented as an electronic display without the use of a mirror, merely relying on the display itself to show the scenery, similar to system 700. One or more cameras will be used to pick up the scenery. In some implementations, the displays 801-3 and 801-4 have a flat surface; in other implementations, as shown in Figure 8, they have a curved surface.

[00111] Figure 9 shows another implementation of an adaptive display system, where a transparent display is placed in front of the image sensor elements, thereby enabling an increased dynamic range of the image sensor by adaptively providing variable shadowing in bright portions of the image sensor matrix.

[00112] Referring now to Figure 9, a block diagram of another implementation of an adaptive display system 900 is shown. The system 900 includes an image sensor 905 with a transparent display 901 in front of the light-sensitive elements composing the image array, and a signal processing unit 913. In some implementations, the transparent display 901 will use as many pixels as the image array matrix composing the camera (same number of columns and rows). In other implementations, the display may use a different number of pixels than the image sensor, typically fewer. By making the transparent display pixels in front of the image sensor pixels darker when more light is received, it is possible to increase the dynamic range of the camera without increasing the size of the image elements and without increasing the noise floor of the dark pixels. In practice, the SNR (signal-to-noise ratio) will then be more constant across the camera pixel elements, because the brightest areas will receive less light and thereby allow more light to be processed by the underlying elements without saturation. This enables a relatively constant SNR across the image elements irrespective of the illumination level, which is not achieved by current solutions. It also allows the pixels to be digitized using a lower number of bits (a lower dynamic range is required), thereby significantly lowering the power consumption and silicon area of the A/D converter. This technique may be used to obtain images from CMOS, CCD and similar image sensors with a dynamic range that may equal or even exceed the best available photographic films. Current image sensors are characterized by very bright sections of images becoming saturated (regions become completely white under a high intensity of light) due to their limited dynamic range, thereby losing image information, while photographic films may still have some dynamic range left even when stressed by bright light conditions.
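The per-pixel shadowing idea above can be sketched as follows. This is a simplified model under our own assumptions (a normalized brightness frame and an invented threshold value), not the patent's implementation:

```python
def attenuation_map(frame, threshold=0.5):
    """Per-pixel transmission for a transparent display mounted in front
    of an image sensor: pixels brighter than `threshold` (on a 0..1
    scale) are attenuated so the light reaching the sensor stays at the
    threshold, while dim pixels pass through unattenuated.  The result
    is a more uniform SNR and fewer saturated (clipped-white) regions."""
    return [[threshold / v if v > threshold else 1.0 for v in row]
            for row in frame]

# One nearly saturated pixel (0.95) gets its transmission cut to about
# 0.53; the dark pixels keep full transmission, so their noise floor is
# unchanged.
frame = [[0.10, 0.40],
         [0.95, 0.20]]
trans = attenuation_map(frame)
```

The sensor then reads the product of scene brightness and transmission, and the signal processing unit can recover the true brightness by dividing the readout by the known transmission map.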

[00113] In some implementations, the adaptive display system of the present invention may include an image sensor with multiple sensitivity modes using shorter and longer exposure times, may include LCD attenuation of the entire view for an increased dynamic range, or may switch between multiple attenuation values to obtain multiple images at higher and lower light sensitivity.

[00114] Figure 10 shows an example of a transparent adaptive display system providing adaptive shading very similar to Figure 1, with the difference that the driver also wears a set of 3D shutter glasses (similar to the 3D glasses used to watch 3D movies on television displays) that provide alternating views for the left and right eye and simultaneously provide shadowing for the eye opposite the one viewing the right or left portion of the 3D image. In this implementation, the 3D glasses prevent the user from seeing the darkened portion of the image belonging to the opposite eye, thereby removing the ghost shadows that are part of the view as seen by the user in Figure 1. The system provides synchronization between the transparent display and the 3D glasses, i.e. when the 3D glasses turn off the view from the right eye, the transparent display provides the shadowing for the left eye, and vice versa.

[00115] Referring now to Figure 10, a schematic representation is shown of an example of a transparent adaptive display system implemented in an adaptive frontshield 1000 that provides improved viewing conditions for a driver, pilot or the like when viewing a bright object 1017. This transparent adaptive display system includes a transparent display (e.g. an LCD display) 1001, adjacent a frontscreen 1002, that is able to darken or partially darken areas 1003-1, 1003-2 (producing darkened fields 1007-1 and 1007-2) based on information obtained from front-facing camera 1005-1, which is used to obtain a measurement of the environmental light conditions, and rear-facing camera 1005-2, which is used to obtain a view of the driver or pilot in order to estimate the position of the eyes 1009-1 and 1009-2. The driver will wear a pair of shutter-based 3D glasses 1015 to avoid any problems with ghost images caused by the darkened areas that face the other eye. In other words, the transparent display 1001 shown in Figure 10 will show darkened images corresponding to the right and left eye, as calculated by the image processing unit 1013, while the shutter-based 3D glasses will be synchronized to the transparent display using radio or light signals, thereby providing a different darkening for the left and right eye corresponding to the line of sight to bright objects. Otherwise, the transparent adaptive display system shown in Figure 10 is similar to the transparent adaptive display system 100 shown in Figure 1, and can include a light source 1019 for illuminating a region 1021 near the driver.

[00116] In some implementations, the glasses may emit or receive signals to indicate the location or position of the glasses.

[00117] Referring now to Figure 11, a method is shown for providing adaptive eye shadowing for improved eyesight. The method includes the following blocks: at block 1101, capture an image of the current scenery; at block 1102, capture the position of one or more eyes; and finally, at block 1103, provide adaptive shading in the path of light so that bright light received from light sources is attenuated differently than darker portions of the scenery.
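The three blocks above can be sketched in code. The sketch below is a minimal illustration (the scene is a canned brightness frame, and the threshold, helper name and eye coordinates are invented for this example, not taken from the patent):

```python
def find_bright_regions(scene, threshold=0.8):
    """Helper for blocks 1101/1103: return (row, col) indices of scene
    pixels whose normalized brightness exceeds the threshold, i.e. the
    light sources that are candidates for adaptive shading."""
    return [(r, c) for r, row in enumerate(scene)
            for c, v in enumerate(row) if v > threshold]

# Block 1101: capture an image of the current scenery (a canned frame here).
scene = [[0.2, 0.9, 0.1],
         [0.3, 0.2, 0.95]]
# Block 1102: the eye position would come from the rear-facing sensor.
eye = (0.0, -0.6, 0.0)
# Block 1103: each bright region, together with the eye position, would
# then be mapped through the line-of-sight geometry of Figure 15 to
# decide which display area to darken.
bright = find_bright_regions(scene)
```

In a real system this loop would run continuously, with blocks 1101 and 1102 fed by the front-facing and rear-facing image sensors respectively.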

[00118] Referring now to Figure 12, a method is shown for providing adaptive eye shadowing for improved eyesight using a manual adjustment of dark spots. The method includes the following blocks: at block 1201, capture an image of the current scenery; at block 1202, select the position of the eyes by manual adjustment; and finally, at block 1203, provide adaptive shading in the path of light so that bright light received from light sources is attenuated differently than darker portions of the scenery. In this method, the position of the eyes is required to be relatively stable in order to provide proper shading of bright light received from one or more light sources.

[00119] Referring now to Figure 13, a method is shown for providing adaptive shadowing for increased dynamic range. The method includes the following blocks: at block 1301, capture an image of the current scenery; at block 1302, use the image sensor information to provide pixel shadowing information for a transparent display located in front of the image sensor; and finally, at block 1303, provide adaptive shadowing of the image sensor pixels based on the brightness of the received light in different regions.

[00120] Referring now to Figure 14, a method is shown for providing adaptive eye shadowing that enables removal of ghost images associated with an adaptive shading display. The method includes the following blocks: at block 1401, capture an image of the current scenery; at block 1402, capture the position of one or more eyes of an observer; at block 1403, provide adaptive shading in the path of light toward a left eye of the observer based on image brightness information and the position of the left eye; and at block 1404, provide adaptive shading in the path of light toward a right eye of the observer based on image brightness information and the position of the right eye, so that bright light received from light sources is attenuated more than darker portions of the scenery, and use a pair of 3D glasses to swap between right and left views. This means the transparent adaptive display would alternate between providing shading for the left and right views while, at the same time, the glasses would shut off the view from the opposite side. As an example, when the transparent display provides shading for the view obtained by the right eye, the shutter in front of the left eye would be dark, and vice versa. This way, a 3D shading can be obtained without ghost images and without needing a high-resolution display attached to the glasses. The 3D glasses would receive or emit a signal (light, infrared or a radio signal such as Bluetooth™ or the like) to provide synchronization between the adaptive shading system and the 3D glasses. This system has the advantage that it provides shading for both eyes without any ghost images while at the same time having very low power consumption for the glasses, because the glasses would be of simpler construction, only requiring a screen to be turned on or off rather than presenting a complete scenery.

[00121] In one implementation, the adaptive shading system would include a scenery predictor, predicting the imagery ahead of time to avoid any lag between the captured images, the required processing time and the presented adaptive shading. By calculating the movement of bright objects, it is possible to predict the position of these bright objects at a later time and thereby eliminate the lag associated with the processing in the system. As an example, if the position of an object at time T0 is P0 = (x0, y0, z0) and the position at time T0+ΔT is P1 = (x1, y1, z1), then one can predict the position at time T0+2ΔT from the velocity V to be P1 + VΔT = P1 + (P1 − P0) = 2P1 − P0 = (2x1−x0, 2y1−y0, 2z1−z0). This calculation assumes the acceleration of the object to be zero. If more images are captured, the path of the objects involved in the scenery can be predicted more accurately by using Newton's laws of motion, e.g. taking into account acceleration and changes in acceleration.
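The zero-acceleration prediction above translates directly into code; this is a sketch of the stated formula (the function name and interface are ours):

```python
def predict_position(p0, p1, steps=1):
    """Constant-velocity extrapolation: p0 is the object position at
    time T0, p1 the position at T0 + dT.  With zero acceleration the
    predicted position at T0 + (1 + steps)*dT is p1 + steps*(p1 - p0);
    for steps=1 this reduces to the 2*P1 - P0 of the text."""
    return tuple(b + steps * (b - a) for a, b in zip(p0, p1))

# A bright object moving one unit per frame along x:
p_next = predict_position((0.0, 2.0, 5.0), (1.0, 2.0, 5.0))
# The shading can then be drawn at p_next before the frame arrives,
# hiding the capture-and-processing latency.
```

With three or more captured positions, the same idea extends to a second-order (constant-acceleration) extrapolation.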

[00122] In one implementation, the adaptive shading system includes at least two cameras for the capture of the scenery and uses them both for calculating the optimal attenuation pattern for the transparent screen and for estimating the distance to vehicles and obstacles on a road. This way, the same cameras may provide a safety mechanism for slowing down or stopping the car in case of obstacles on the road while at the same time providing brightness information to the information processing unit. The distance information to objects (e.g. light sources emitting and/or reflecting bright light) may also be used to provide a more accurate estimate of the current position, and a better prediction of the future position, of objects to be shaded on the transparent display. This may be used to provide visual cues on the display to warn the driver of obstacles ahead or to reduce the speed accordingly. In some implementations, the braking action or reduction of throttle may be performed automatically by the system.
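A two-camera range estimate of this kind can be sketched with the triangulation construction described for Figure 17 below; the angle names follow that construction, while the degree-based interface is our own choice:

```python
import math

def range_from_two_sensors(phi1_deg, phi2_deg, d):
    """Triangulate the distance to a target seen by two sensors a
    baseline d apart: phi1 and phi2 are the angles the target makes at
    each sensor, the triangle's third angle is phi3 = 180 - phi1 - phi2,
    and the law of sines gives the range from the first sensor (the side
    opposite phi2)."""
    phi3 = math.radians(180.0 - phi1_deg - phi2_deg)
    return d * math.sin(math.radians(phi2_deg)) / math.sin(phi3)

# Symmetric case: both sensors see the target at 60 degrees with a 1 m
# baseline, forming an equilateral triangle, so the range is 1 m.
```

The same measurement thus serves both the obstacle-warning function and the shading prediction, since range plus the two angles fully fixes the target's position.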

[00123] Referring now to Figure 15, a method of determining a location or position of a point PD (indicated by point 1503) on the transparent display is shown. The point PD represents the intersection between the line of sight of the object (e.g. bright light source) P1 (indicated by point 1517) and the observer P0 (indicated by point 1509). Assuming P0 has coordinates (x0, y0, z0) and P1 has coordinates (x1, y1, z1), the line passing through both these points can be described using the parametric equation P = (x, y, z) = (x0, y0, z0) + t(x1−x0, y1−y0, z1−z0) = (x0, y0, z0) + t(Δx, Δy, Δz), where t is a real value that equals zero when passing P0 and one when passing P1. The intersection with the display 1501 is determined as shown in Figure 15, with the display located in the xz-plane (y = 0). Thus, inserting y = 0 into the equation above and solving y0 + t·Δy = 0 for t gives t = −y0/Δy. By inserting this value in the equation for the line, the coordinates of PD = (xD, 0, zD) are obtained, from which xD = x0 − y0·Δx/Δy and zD = z0 − y0·Δz/Δy are determined. The coordinates of P0 and P1 may not be known, but approximate values may be obtained by use of image sensors. Figure 16 shows how the coordinates of P1 (indicated by point 1617) are obtained, and a similar process may be performed for obtaining P0. The coordinates of P1 are determined from spherical geometry as follows:

P1 = (x1, y1, z1) = (l1·cosθ1·cosφ1, l1·cosθ1·sinφ1, l1·sinθ1).

Similarly, the coordinates of P0 are determined as follows:

P0 = (x0, y0, z0) = (l0·cosθ0·cosφ0, l0·cosθ0·sinφ0, l0·sinθ0),

where x0, y0, z0, θ0, φ0 and l0 are defined in a similar fashion to those of point P1 in Figure 16, but using point P0 as reference. By combining these sets of equations, the position of PD is determined from the spherical coordinates:

xD = x0 − y0·Δx/Δy = l0·cosθ0·cosφ0 − l0·cosθ0·sinφ0·(Δx/Δy), where

Δx/Δy = (l1·cosθ1·cosφ1 − l0·cosθ0·cosφ0)/(l1·cosθ1·sinφ1 − l0·cosθ0·sinφ0).

For l1 >> l0 (e.g. the position of the sun is much further away than the distance between the transparent display 1501 and point 1509), Δx/Δy reduces to cosφ1/sinφ1 = cotφ1, and this becomes:

xD ≈ l0·cosθ0·cosφ0 − l0·cosθ0·sinφ0·cotφ1 = l0·cosθ0·(cosφ0 − sinφ0·cotφ1).

Similarly, we can find zD = z0 − y0·Δz/Δy = l0·sinθ0 − l0·cosθ0·sinφ0·(Δz/Δy), where

Δz/Δy = (l1·sinθ1 − l0·sinθ0)/(l1·cosθ1·sinφ1 − l0·cosθ0·sinφ0).

[00124] Similarly, for l1 >> l0, Δz/Δy reduces to tanθ1/sinφ1, so that zD ≈ l0·sinθ0 − l0·cosθ0·sinφ0·(tanθ1/sinφ1) = l0·(sinθ0 − cosθ0·sinφ0·tanθ1/sinφ1). The angles {θ0, φ0, θ1, φ1} may be obtained directly from the image sensors, while l1 may often be assumed to be infinity, or two image sensors may be used to obtain this value as shown in Figure 17. The distance l0 between the image sensors and the eye of the observer can be found as follows (referring to Figure 17; the same method may be used to find l1, the distance between the image sensor and the bright object): φ3 = 180° − φ1 − φ2, and by the law of sines d/sinφ3 = l0/sinφ2, so that l0 = d·sinφ2/sinφ3, where d is the distance between the two image sensors (indicated by points 1705-2a, 1705-2b) and l0 is the distance to the eye of the observer (indicated by point 1709).

[00125] If only one image sensor is present, the distance l0 can be found by setting the distance d between the two eye positions to a known value; this is essentially constant for any given user, e.g. a 56 mm distance between the centers of the eyes. From this, the distance l0 can be found as follows (referring to Figure 18, where the eyes of the observer are indicated by points 1809-1, 1809-2 and the image sensor is indicated by point 1805-2): φ1 = 90° − φ3A, φ2 = 90° − φ3B and φ3 = 180° − (φ1 + φ2) = φ3A + φ3B. By the law of sines, the distance B from the image sensor to the eye at point 1809-2 is B = d·sinφ1/sinφ3, and therefore:

[00126] l0 = B·cosφ3B. In this calculation, the distance to the user is based on two-dimensional data. It is straightforward to extend the distance calculation to be based on three-dimensional information for increased accuracy, using geometry similar to that shown in Figures 15 and 16.
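The geometry of Figures 15 and 16 can be checked numerically. The sketch below implements the line-plane intersection and a spherical-to-Cartesian conversion under the convention assumed above (elevation θ, azimuth φ); the function names and the radian interface are our own:

```python
import math

def spherical_to_cartesian(l, theta, phi):
    """Figure 16 convention as reconstructed: theta is elevation and
    phi is azimuth, so x = l*cos(theta)*cos(phi),
    y = l*cos(theta)*sin(phi), z = l*sin(theta).  Angles in radians."""
    return (l * math.cos(theta) * math.cos(phi),
            l * math.cos(theta) * math.sin(phi),
            l * math.sin(theta))

def display_intersection(p_obs, p_src):
    """Point PD where the observer-to-source sight line crosses the
    display plane y = 0 (Figure 15): t = -y0/dy, then
    xD = x0 + t*dx and zD = z0 + t*dz."""
    dx = p_src[0] - p_obs[0]
    dy = p_src[1] - p_obs[1]
    dz = p_src[2] - p_obs[2]
    t = -p_obs[1] / dy
    return (p_obs[0] + t * dx, 0.0, p_obs[2] + t * dz)

# Observer 1 m behind the display plane, bright source 2 m in front and
# 3 m above the origin: the sight line crosses the display 1 m up.
p_d = display_intersection((0.0, -1.0, 0.0), (0.0, 2.0, 3.0))
```

In the system, `p_obs` and `p_src` would come from the eye-tracking and scene-facing sensors via `spherical_to_cartesian`, and `p_d` selects the display pixels to darken.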

[00127] The calculation of the trigonometric function values may be done using table lookup or by various approximations, such as Taylor series or the CORDIC algorithm.
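As an illustration of the CORDIC option, here is a rotation-mode sketch that computes cosine and sine from shift-and-add style iterations. This is a textbook formulation under our own assumptions, not code from the patent; a hardware version would use fixed-point arithmetic and precomputed tables:

```python
import math

def cordic_cos_sin(angle, iterations=32):
    """Rotation-mode CORDIC for |angle| < pi/2 (radians).  Each step
    rotates the working vector by +/- atan(2**-i), which in hardware
    needs only shifts and adds; the accumulated rotation gain is
    compensated up front by starting from the gain-corrected x."""
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = gain, 0.0, angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0       # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x, y                              # (cos(angle), sin(angle))
```

Because each iteration adds roughly one bit of precision, the iteration count can be traded directly against accuracy, which is what makes the method attractive for the embedded signal processing units described here.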

[00128] In some implementations, the adaptive shielding may be obtained by changing the refractive index of various liquids by application of an appropriate electric field, thereby changing the transparency of the display.

[00129] Referring to Figure 19, an example implementation of an adaptive transparent display system 10 (hereinafter referred to as system 10) for adaptively shielding or shadowing one or more bright objects in a field of view seen by an observer is shown. System 10 is similar to system 100, with similar elements having similar numbers, however in a "10" series rather than a "100" series. The system 10 includes a transparent display device 11, a memory 12, a processor 13, a first image sensor 15-1, and a second image sensor 15-2. The processor 13 controls the overall operation of system 10. The processor 13 is coupled to and interacts with the transparent display device 11, the first image sensor 15-1, the second image sensor 15-2, and the memory 12. The processor 13 may be implemented as a plurality of processors, and/or as one or more DSPs (Digital Signal Processors), including but not limited to one or more central processors (CPUs).

[00130] The transparent display device 11 comprises any suitable one of, or combination of, transparent flat panel display devices (e.g. transparent LCD (liquid crystal display) devices, transparent OLED (organic light emitting diode) display devices, and the like). In some embodiments, the transparent display device 11 includes one transparent display screen (not shown) controlled by a transparent display device controller (not shown) that interacts with the processor 13. The transparent display device controller controls the individual pixels of the transparent display screen to display images on the transparent display screen. The transparent display device controller also controls the transparent display screen to adjust a transparency of one or more areas on the transparent display screen to block light from passing through the one or more areas, as described in further detail below. In some embodiments, the transparent display device 11 includes multiple transparent display screens (not shown), each controlled by a transparent display device controller (not shown) that interacts with the processor 13.

[00131] The processor 13 is also configured to interact with the first image sensor 15-1. The first image sensor 15-1 is positioned to face a scene in front of the transparent display device 11 that includes one or more bright light sources and/or objects. The first image sensor 15-1 is configured to measure an intensity of light in the scene (e.g. the field of view of the first image sensor 15-1), generate light intensity data indicative of brightness of each of the one or more bright light sources, and transmit the light intensity data to the processor 13. The first image sensor 15-1 is any suitable sensor that is capable of measuring or determining the intensity of light in a scene or field of view of the sensor, such as, for example, a CCD imaging array, an optical imaging array, a polymer-based light sensing array, and the like.

[00132] The processor 13 is configured to interact with the second image sensor 15-2. The second image sensor 15-2 is positioned to face an observer located in a line of sight of the transparent display device 11, and can face in a direction opposite the first image sensor 15-1. The second image sensor 15-2 is configured to obtain an image of the observer, generate image data representative of the image of the observer, and transmit the image data to the processor 13. The second image sensor 15-2 is any suitable sensor that can capture an image and generate image data representative of the image, such as, for example, a CCD imaging array, an optical imaging array, and the like.

[00133] The processor 13 is also configured to communicate with memory 12 comprising a non-volatile storage unit (e.g. Electrically Erasable Programmable Read Only Memory "EEPROM", Flash Memory) and a volatile storage unit (e.g. random access memory "RAM"). Programming instructions that implement the functional teachings of the system 10 as described herein are typically maintained, persistently, in memory 12 and used by processor 13, which makes appropriate utilization of volatile storage during the execution of such programming instructions. Those skilled in the art will recognize that memory 12 is an example of computer readable media that can store programming instructions executable on processor 13. Furthermore, memory 12 is also an example of a memory unit and/or memory device.

[00134] The memory 12 stores a software program or application 14 that controls basic operations of the system 10. The software program or application 14 is normally installed on the system 10 at manufacture and is typically stored in memory 12. The software program or application 14 is executed by the processor 13. Those skilled in the art will appreciate that portions of the software program or application 14 may be temporarily loaded into a volatile storage unit of memory 12.

[00135] The processor 13 also interacts with the memory 12 to store the light intensity data received from the first image sensor 15-1 and the image data received from the second image sensor 15-2.

[00136] The processor 13 is further configured to interact with a power supply 16. The power supply 16 powers the components of the system 10, including the transparent display device 11, the first image sensor 15-1, the second image sensor 15-2, the memory 12, and the processor 13. In some embodiments, the power supply 16 includes a battery, a power pack, micro fuel cells and the like; however, in other embodiments, the power supply 16 includes a port (not shown) to an external power supply and a power adaptor (not shown), such as an alternating current to direct current (AC-to-DC) adaptor, that provides power to the components of system 10.

[00137] In some implementations, the processor 13 is configured to interact with a controller of a vehicle (not shown) via an auxiliary I/O port 18 of the system 10 to receive information from the controller of the vehicle. In this implementation, the processor 13 is configured to control the transparent display device 11 to render the information received from the controller of the vehicle on a transparent screen (not shown) of the transparent display device 11. The information received from the controller of the vehicle includes one or more of: a speed of the vehicle; navigation information; an oil level of the vehicle; and an engine temperature of the vehicle.

[00138] In some implementations, the processor 13 is further configured to interact with a camera 19 that is positioned such that a field of view of the camera 19 faces a scene behind an observer. In this implementation, the camera 19 is configured to obtain an image of the scene behind the observer and generate image data representative of the image behind the observer. The processor 13 is configured to receive the image data representative of the image behind the observer and control the transparent display device 11 to render the image behind the observer on the transparent display device 11.

[00139] Attention is now directed to Figure 20, which depicts a flowchart of a method 20 for adaptively shadowing one or more bright light sources in a field of view seen by an observer, according to an example implementation. The method 20 is carried out by software executed, for example, by the processor 13 of the system 10. Coding of software for carrying out the method 20 is within the scope of a person of ordinary skill in the art given the present disclosure. In some embodiments, computer-readable code executable by the processor 13 of the system 10 to perform method 20 is stored in a computer-readable storage medium, device, or apparatus, such as a non-transitory computer-readable medium.

[00140] It is to be emphasized that method 20 need not be performed in the exact sequence as shown, unless otherwise indicated; likewise, various blocks may be performed in parallel rather than in sequence; hence the elements of the method 20 are referred to herein as "blocks" rather than "steps".

[00141] The method 20 begins at block 22. At block 22, the first image sensor 15-1, which is positioned to face a scene in front of the transparent display device 11, measures an intensity of light in the scene received from a bright light source (e.g. in a field of view of the first image sensor 15-1), and generates light intensity data indicative of brightness of the light received from the bright light source. The light intensity data is stored in memory 12. The method then proceeds to block 24.

[00142] At block 24, the second image sensor 15-2, which is positioned to face an observer located in a line of sight of the transparent display device 11, obtains an image of the observer, and generates image data representative of the image of the observer. The image data is stored in memory 12. The method then proceeds to block 26.

[00143] At block 26, the processor 13 accesses the memory 12 to retrieve the stored light intensity data, receives the stored light intensity data from the memory 12, and determines a location or position of the light source relative to the transparent display device based on the light intensity data. The method then proceeds to block 28.

[00144] At block 28, the processor 13 accesses the memory 12 to retrieve the stored image data, receives the stored image data from the memory 12, and determines a location or position of the observer relative to the transparent display device based on the image data. In some implementations, the processor 13 determines a location or position of one eye or both eyes of the observer relative to the transparent display device based on the image data. The method then proceeds to block 30.

[00145] At block 30, the processor 13 controls the transparent display device 11 based on the location of the observer and the location or position of the light source to adaptively adjust a transparency at an area of the transparent display device 11 to at least partially block light received from the light source from passing through the area of the transparent display device 11. In some implementations, the processor 13 adjusts a transparency at the area of the transparent display device 11 by shadowing or darkening the area of the transparent display device 11 to at least partially block (or shield) light received from the light source from passing through the area, thus attenuating light in the line of sight of the observer directed towards the light source. The method ends after block 30.

[00146] In some implementations, the system 10 includes an additional light source (not shown) for illuminating the observer. In some implementations, the additional light source is an infrared light source and the second image sensor 15-2 is configured to respond to infrared light.

[00147] In some implementations, the first image sensor 15-1 is further configured to measure an intensity of light in the scene received from a plurality of bright objects, and to generate light intensity data indicative of the brightness of each of the plurality of bright objects. In this implementation, the light intensity data indicative of the brightness of each of the plurality of light sources is stored in memory 12, and the processor 13 is configured to access the light intensity data indicative of the brightness of light received from each of the plurality of light sources, determine a location or position of each of the plurality of light sources relative to the transparent display device based on the light intensity data, and control the transparent display device 11 based on the location of each light source and the location of the observer to adaptively adjust a transparency at a plurality of areas of the transparent display device 11 to at least partially block (shield) light received from each light source from passing through each of the plurality of areas of the transparent display device 11.
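The multiple-source variant of paragraph [00147] can be sketched by darkening one area per bright object rather than a single area. Reusing the assumed one-dimensional cell model, any cell whose measured intensity exceeds a glare threshold gets its own darkened area; the threshold value and function name are illustrative assumptions.

```python
# Assumed sketch of paragraph [00147]: one darkened area per bright object.
def shade_multiple(intensity_row, glare_threshold, dark_level=0.2):
    """Darken every cell whose intensity exceeds glare_threshold;
    all other cells remain fully transparent."""
    return [dark_level if v > glare_threshold else 1.0
            for v in intensity_row]
```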

[00148] In some implementations, the processor 13 is further configured to repeatedly receive light intensity data over a time period, predict the location or position of the light source relative to the transparent display device based on the received light intensity data, and control the transparent display device based on the location of the observer and the predicted location of the light source to adaptively adjust the transparency at the area of the transparent display device 11 to at least partially block light received from the light source from passing through the area of the transparent display device 11.
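The prediction of paragraph [00148] could, as one possibility, use linear extrapolation over the repeatedly received samples. The constant-velocity model below is an assumption for illustration; the patent leaves the prediction method open.

```python
# Assumed prediction sketch: constant-velocity extrapolation of the
# light source position from time-ordered (x, y) samples taken at a
# fixed rate.
def predict_position(samples):
    """Extrapolate the next (x, y) position one sample interval ahead;
    with fewer than two samples, return the latest position unchanged."""
    if len(samples) < 2:
        return samples[-1]
    (x0, y0), (x1, y1) = samples[-2], samples[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```

Predicting ahead lets the darkened area be placed where the light source will be when the display update takes effect, compensating for sensing and display latency.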

[00149] In some implementations, determining the location or position of the observer relative to the transparent display device 11 includes determining spherical coordinates (i.e. vertical and horizontal angles and distance) of the observer relative to the transparent display device 11. In some implementations, determining the location or position of the light source relative to the transparent display device 11 includes determining spherical coordinates (i.e. vertical and horizontal angles) of the light source relative to the transparent display device 11 and a distance from the light source to the transparent display device 11. In some implementations, the location or position (e.g. the coordinates) of the observer (e.g. the eyes of the observer) relative to the transparent display device 11, the distance from the observer to the transparent display device 11, the location or position (e.g. the coordinates) of the bright light source relative to the transparent display device 11, and the distance from the bright light source to the transparent display device 11 are determined using the methods described in Figures 15-18 above. Also, in some implementations, adjusting the transparency of the area (or a point) on the transparent display device 11 includes determining a position of the area (or point) on the transparent display device 11 to be darkened or shadowed using the methods described in Figures 15-18 above.
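The core geometry of placing the darkened area can be sketched as a line-plane intersection: once the eye and the light source are located in a common coordinate frame, the area to darken is where the eye-to-light line crosses the display. The sketch below assumes the display lies in the plane z = 0 with the eye on one side and the light on the other; the function name and coordinate convention are illustrative, not the patent's Figures 15-18 construction.

```python
# Assumed geometry sketch: intersect the eye-to-light line with the
# display plane z = 0 to find the point to darken.
def shadow_point(eye, light):
    """eye and light are (x, y, z) tuples on opposite sides of the
    display plane; return the (x, y) crossing point on that plane."""
    ex, ey, ez = eye
    lx, ly, lz = light
    if lz == ez:
        raise ValueError("line is parallel to the display plane")
    t = -ez / (lz - ez)          # parameter where z(t) = 0
    return (ex + t * (lx - ex), ey + t * (ly - ey))
```

For example, with the eye one unit behind the display at the origin and the light one unit in front at (2, 2), the darkened point falls midway, at (1, 1) on the display.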

[00150] In some implementations, the second image sensor facing the observer and the associated position sensing may be replaced by simply increasing the size of the one or more darkened areas on the transparent display device, thereby ensuring that the path between the eyes and any bright object is covered regardless of eye position.

[00151] The above-described embodiments are intended to be examples of the present invention, and alterations and modifications may be effected thereto by those skilled in the art without departing from the scope of the invention, which is defined solely by the claims appended hereto.