

Title:
APPARATUS AND METHOD FOR CONTROLLING A VEHICLE DISPLAY
Document Type and Number:
WIPO Patent Application WO/2018/145956
Kind Code:
A1
Abstract:
An apparatus (101), a system, a vehicle (106), a method (1300), a computer program (904) or a non-transitory computer readable medium (906) for transforming images for display on a head-up display (103) are disclosed. The method (1300) comprises obtaining positional data representative of a current position of an eye (104) of a user (105) and determining a transformation in dependence on the positional data. A transformation signal is outputted for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display (103).

Inventors:
HARDY ROBERT (GB)
DIAS EDUARDO (GB)
Application Number:
PCT/EP2018/052293
Publication Date:
August 16, 2018
Filing Date:
January 30, 2018
Assignee:
JAGUAR LAND ROVER LTD (GB)
International Classes:
G06F3/01
Foreign References:
DE102005037797A12007-02-08
US20160357015A12016-12-08
DE102015109027A12016-12-08
DE102004050064A12006-04-27
Attorney, Agent or Firm:
CHANG, Seon-Hee (GB)
Claims:
CLAIMS

1. A method of transforming images for display on a head-up display, the method comprising:

obtaining positional data representative of a current position of an eye of a user within the eye-box;

determining a transformation in dependence on the positional data;

outputting a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display.

2. The method according to claim 1, comprising comparing the obtained positional data with stored positional data to obtain the transformation.

3. A method according to claim 2, wherein the transformation is stored with corresponding positional data during a calibration process.

4. A method according to any one of claims 1 to 3, wherein the method comprises outputting a positioning signal for causing movement of a moveable element of the head-up display to adjust the position of an eye-box of the head-up display responsive to the current position of the at least one eye of the user.

5. The method according to claim 4, comprising comparing the position of the moveable element with stored positions of the moveable element to obtain the transformation.

6. A method according to any one of claims 1 to 5, wherein the positional data identifies a current two-dimensional position of an eye of the user.

7. A method according to any one of claims 1 to 6, wherein: the output signal is provided to a head-up display having an image display device and optical elements configured to apply a distortion to images displayed on the image display device; and the distortion is an approximation to an inverse transformation of the transformation applied to the image data.

8. A method according to claim 7, wherein the optical elements include a windshield of a vehicle.

9. A method according to any one of claims 1 to 8, wherein the head-up display comprises a moveable element and the method comprises providing a positioning signal that depends on the positional data to the head-up display to cause the head-up display to adjust the position of a moveable element in dependence on the positioning signal.

10. A method according to claim 9, wherein the moveable element comprises a mirror.

11. Apparatus for transforming images for display on a head-up display, the apparatus comprising a control means configured to:

obtain positional data representative of a current position of an eye of a user within the eye-box;

determine a transformation in dependence on the positional data;

output a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display.

12. Apparatus according to claim 11, wherein the control means is configured to compare the obtained positional data with stored positional data to obtain the transformation.

13. Apparatus according to claim 11 or claim 12, wherein the transformation is an approximation to an inverse transformation of an image distortion caused by the optical components of the head-up display with the current position of the eye of the user relative to the head-up display.

14. Apparatus according to any one of claims 11 to 13, wherein the control means is configured to output a positioning signal for causing movement of a moveable element of the head-up display to adjust the position of an eye-box of the head-up display responsive to the current position of the at least one eye of the user.

15. Apparatus according to claim 14, wherein the control means is configured to compare the position of the moveable element with stored positions of the moveable element to obtain the transformation.

16. Apparatus according to any one of claims 11 to 15, wherein the apparatus is configured to receive a signal from an imaging means and obtain the positional data from the signal.

17. Apparatus according to any one of claims 11 to 16, wherein the positional data identifies a current two-dimensional position of at least one eye of the user.

18. Apparatus according to any one of claims 11 to 17, wherein the control means is configured to determine the positional data by analysing image data to identify a representation of an eye of the user.

19. Apparatus according to any one of claims 11 to 18, wherein the control means comprises an electronic processor and an electronic memory device coupled to the electronic processor and having instructions stored therein.

20. A system comprising the apparatus of any one of claims 11 to 19 and a head-up display comprising an image display device, wherein the head-up display is configured to display on the image display device the transformed image.

21. A system according to claim 20, wherein the head-up display has optical elements configured to apply a distortion to images displayed on the image display device and the distortion approximates to the inverse transformation applied to the image data.

22. A system according to claim 21, wherein the optical elements include a windshield of a vehicle.

23. A system according to any one of claims 20 to 22, wherein: the head-up display comprises a moveable element; the control means is configured to provide a positioning signal that depends on the positional data; and the position of the moveable element is adjustable in dependence on the positioning signal.

24. A system according to claim 23, wherein the moveable element comprises a mirror.

25. A system according to any one of claims 20 to 24 comprising an imaging means positioned on or within a vehicle to capture an image containing a representation of an eye of the user of the vehicle and processing means configured to process the captured image to generate the positional data.

26. A vehicle comprising the system according to any one of claims 20 to 25.

27. A computer program which when executed by a processor causes the processor to perform the method of any one of claims 1 to 10.

28. A non-transitory computer-readable storage medium having instructions stored therein which when executed on a processor cause the processor to perform the method of any one of claims 1 to 10.

29. An apparatus, a system, a vehicle, a method or a non-transitory computer readable medium as described herein with reference to the accompanying figures.

Description:
APPARATUS AND METHOD FOR CONTROLLING A VEHICLE DISPLAY

TECHNICAL FIELD

The present disclosure relates to an apparatus and method for controlling a vehicle display. In particular, but not exclusively, it relates to transforming images for display on a head-up display in a vehicle, such as a road vehicle.

Aspects of the invention relate to an apparatus, a system, a vehicle, a method, a computer program and a non-transitory computer readable medium.

BACKGROUND

Head-up displays are provided in some road vehicles, in which an image presented on a display is reflected in the windshield so that the image appears to the driver as a virtual object on the outside of the windshield. A space within the vehicle, from which the whole of the image presented by the head-up display may be viewed, is referred to as the eye-box of the head-up display. If the driver moves the position of their head, and therefore their eyes, the driver will still be able to see the image while their eyes remain within the eye-box. However, windshields typically have a complex curved shape and the light that the driver sees is reflected off a region of the windshield, and at angles, that depend on the position of the driver's eyes. Consequently, the viewed images may be distorted by the reflection off the windshield in ways that depend upon the driver's eye position. Therefore, a problem with existing systems is that as the driver moves their head the image will appear to change shape due to the varying distortion.

It is an aim of the present invention to address this disadvantage of the prior art.

SUMMARY OF THE INVENTION

Aspects and embodiments of the invention provide an apparatus, a system, a vehicle, a method, a computer program and a non-transitory computer readable medium as claimed in the appended claims.

According to an aspect of the invention there is provided a method of transforming images for display on a head-up display, the method comprising: obtaining positional data representative of a current position of an eye of a user; determining a transformation in dependence on the positional data; outputting a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display.

This provides the advantage that the user may be presented with an image by a head-up display that appears to be undistorted when they move their head to various positions.

According to a further aspect of the invention there is provided a method of transforming images for display on a head-up display, the method comprising: obtaining positional data representative of a current position of an eye of a user within the eye-box; determining a transformation in dependence on the positional data; outputting a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display.

This applies a distortion correction based on the position of an eye of a user, to account for distortion that varies as the user moves within the eye-box.

According to another aspect of the invention there is provided a method of transforming images for display on a head-up display, the method comprising: obtaining positional data representing a current position of an eye of a user; determining a transformation in dependence on the positional data; applying the transformation to image data representing an image to generate transformed image data representing a transformed image; and providing an output signal representing the transformed image.

According to yet another aspect of the invention there is provided a method of transforming images for display on a head-up display, the method comprising: obtaining positional data representing a current position of an eye of a user; determining a transformation in dependence on the positional data; applying the transformation to image data representing an image to generate transformed image data representing a transformed image; and displaying the transformed image on a display device of a head-up display.
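The sequence recited in these method aspects can be sketched as a minimal pipeline. This is an illustration only, not the patent's implementation: the 2x3 affine form of the transformation, the 0.001 shear gain tied to the eye position, and the use of a point list to stand in for image data are all invented for the example.

```python
def determine_transformation(eye_xy):
    """Toy model: map a 2-D eye position to a 2x3 affine matrix whose
    shear terms grow with the eye's offset from a nominal centre.
    The 0.001 gain is an arbitrary illustrative value."""
    ex, ey = eye_xy
    return [[1.0, 0.001 * ex, 0.0],
            [0.001 * ey, 1.0, 0.0]]

def apply_transformation(matrix, point):
    """Apply the 2x3 affine transform to a single image coordinate."""
    x, y = point
    (a, b, tx), (c, d, ty) = matrix
    return (a * x + b * y + tx, c * x + d * y + ty)

def transform_image_points(eye_xy, points):
    """Obtain positional data -> determine a transformation -> apply it
    to the (stand-in) image data, per the method steps above."""
    matrix = determine_transformation(eye_xy)
    return [apply_transformation(matrix, p) for p in points]
```

With the eye at the nominal centre the transform reduces to the identity; off-centre eye positions shear the points, standing in for a pre-compensation of the changed reflection geometry.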

In some embodiments the method comprises comparing the obtained positional data with stored positional data to obtain the transformation. This provides the advantage that transformations for a number of different eye-positions may be previously determined in a calibration process and stored, so that a transformation corresponding to positional data only needs to be retrieved.

In some embodiments the transformation is stored with corresponding positional data during a calibration process. In some embodiments the transformation is determined by looking up the positional data in a look-up table.
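The calibration-plus-look-up embodiment can be sketched as follows. The table layout (calibration eye positions as keys, transformation identifiers as values) and the nearest-neighbour matching rule are assumptions made for the illustration; the patent does not prescribe a particular data structure.

```python
# Hypothetical calibration table: eye positions recorded during a
# calibration process, each mapped to the transformation stored for it.
CALIBRATION_TABLE = {
    (0, 0):   "T_centre",
    (0, 50):  "T_high",
    (0, -50): "T_low",
    (40, 0):  "T_right",
}

def lookup_transformation(eye_xy, table=CALIBRATION_TABLE):
    """Compare the obtained positional data with the stored positional
    data and return the transformation stored for the closest match
    (smallest squared Euclidean distance)."""
    def sq_dist(stored_xy):
        return ((stored_xy[0] - eye_xy[0]) ** 2
                + (stored_xy[1] - eye_xy[1]) ** 2)
    return table[min(table, key=sq_dist)]
```

Because the transformations are precomputed, the run-time cost is only the retrieval, as the advantage paragraph above notes.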

In some embodiments the transformation is an approximation to an inverse transformation of a distortion caused by the optical components of the head-up display for the current position of the eye relative to the head-up display. This provides the advantage that the transformation may compensate for the distortion caused by the optical components of the head-up display so that the user is provided with an apparently undistorted image.

In some embodiments the method comprises outputting a positioning signal for causing movement of a moveable element of the head-up display to adjust the position of an eye-box of the head-up display responsive to the current position of the eye of the user. This provides the advantage that the user is able to move their head over increased distances and still see the whole of the image provided by the head-up display.

In some embodiments the method comprises comparing the position of the moveable element with stored positions of the moveable element to obtain the transformation. This provides the advantage that varying image distortion created by movement of the moveable element may be compensated for.

In some embodiments the method comprises looking up the position of the eye and/or the position of the element in a look-up table to obtain the transformation which is applied to the image data.

In some embodiments the positional data identifies a current two-dimensional position of an eye of the user. This provides the advantage that the transformation may provide compensation for distortions that depend upon the two-dimensional position of the eye of the user, such as up-and-down and left-and-right.

In some embodiments the output signal is provided to a head-up display having an image display device and optical elements configured to apply a distortion to images displayed on the image display device; and the distortion is an approximation to an inverse transformation of the transformation applied to the image data. This provides the advantage that the transformation may compensate for the distortion caused by the optical components of the head-up display so that the user is provided with an apparently undistorted image.
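The inverse-transformation relationship can be illustrated numerically. The quadratic radial model of the optical distortion and the fixed-point inversion below are invented for this sketch; a real head-up display's distortion would be measured during calibration rather than modelled this simply.

```python
def distort(point, k=1e-4):
    """Toy optical distortion: points are pushed outwards by a factor
    that grows with the squared distance from the optical centre."""
    x, y = point
    r2 = x * x + y * y
    return (x * (1 + k * r2), y * (1 + k * r2))

def inverse_distort(point, k=1e-4, iterations=8):
    """Approximate the inverse of `distort` by fixed-point iteration.
    Pre-warping the displayed image with this inverse means the optics
    then restore approximately the intended geometry."""
    x, y = point
    u, v = x, y
    for _ in range(iterations):
        r2 = u * u + v * v
        u = x / (1 + k * r2)
        v = y / (1 + k * r2)
    return (u, v)
```

Composing the two, `distort(inverse_distort(p))` returns `p` to within a small tolerance, which is exactly the "approximation to an inverse transformation" relationship described in the paragraph above.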

In some embodiments the optical elements include a windscreen or windshield of a vehicle.

In some embodiments the head-up display comprises a moveable element and the method comprises providing a positioning signal that depends on the positional data to the head-up display to cause the head-up display to adjust the position of a moveable element in dependence on the positioning signal.

In some embodiments the moveable element comprises a mirror. This provides a simple means of moving the eye-box and, in a system where a manually adjustable mirror already exists, may reduce the cost of providing the automated movement.

According to a further aspect of the invention there is provided an apparatus for transforming images for display on a head-up display, the apparatus comprising a control means configured to: obtain positional data representative of a current position of an eye of a user; determine a transformation in dependence on the positional data; output a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display.

This provides the advantage that the user may be presented with an image by a head-up display that appears to be undistorted when they move their head to various positions.

According to a yet further aspect of the invention there is provided an apparatus for transforming images for display on a head-up display, the apparatus comprising a control means configured to: obtain positional data representative of a current position of an eye of a user within the eye-box; determine a transformation in dependence on the positional data; output a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display.

This applies a distortion correction based on the position of an eye of a user, to account for distortion that varies as the user moves within the eye-box.

In some embodiments the control means is configured to compare the obtained positional data with stored positional data to obtain the transformation.

In some embodiments the control means is configured to look up the positional data in a look-up table to obtain the transformation applied to the image data.

In some embodiments the transformation is an approximation to an inverse transformation of an image distortion caused by the optical components of the head-up display with the current position of the eye of the user relative to the head-up display. This provides the advantage that the transformation may compensate for the distortion caused by the optical components of the head-up display so that the user is provided with an apparently undistorted image.

In some embodiments the control means is configured to output a positioning signal for causing movement of a moveable element of the head-up display to adjust the position of an eye-box of the head-up display responsive to the current position of the eye of the user. This provides the advantage that the user is able to move their head over increased distances and still see the whole of the image provided by the head-up display.

In some embodiments the control means is configured to compare the position of the moveable element with stored positions of the moveable element to obtain the transformation. This provides the advantage that varying image distortion created by movement of the moveable element may be compensated for.

In some embodiments the control means is configured to look up the positional data and/or the position of the element in a look-up table to obtain the transformation which is to be applied to the image data.

In some embodiments the apparatus is configured to receive a signal from an imaging means and obtain the positional data from the signal. This provides the advantage that the position of the user's eyes may be determined without requiring any additional effort or input from the user.

In some embodiments the positional data identifies a current two-dimensional position of an eye of the user. This provides the advantage that the transformation may provide compensation for distortions that depend upon the two-dimensional position of the eyes of the user, such as up-and-down and left-and-right.

In some embodiments the control means is configured to determine the positional data by analysing image data to identify a representation of an eye of the user. This provides the advantage that the position of the user's eyes may be determined without requiring any additional effort or input from the user.

In some embodiments the control means comprises an electronic processor and an electronic memory device coupled to the electronic processor and having instructions stored therein.

According to a yet further aspect of the invention there is provided a system comprising the apparatus of any one of the previous paragraphs and a head-up display comprising an image display device, wherein the head-up display is configured to display on the image display device the transformed image.

In some embodiments the head-up display has optical elements configured to apply a distortion to images displayed on the image display device and the distortion approximates to the inverse transformation applied to the image data.

In some embodiments the optical elements include a windscreen or windshield of a vehicle.

In some embodiments the head-up display comprises a moveable element; the control means is configured to provide a positioning signal that depends on the positional data; and the position of the moveable element is adjustable in dependence on the positioning signal.

In some embodiments the head-up display comprises a moveable element; the control means is configured to provide a positioning signal that depends on the positional data; and the head-up display is arranged to adjust the position of the moveable element in dependence on the positioning signal.

In some embodiments the moveable element comprises a mirror.

In some embodiments the system comprises an imaging means positioned on or within a vehicle to capture an image containing a representation of an eye of the user of the vehicle and processing means configured to process the captured image to generate the positional data.

According to yet another aspect of the invention there is provided a vehicle comprising the system according to any one of the previous paragraphs.

According to a still further aspect of the invention there is provided a computer program which when executed by a processor causes the processor to perform the method of any one of the previous paragraphs.

According to another aspect of the invention there is provided a non-transitory computer-readable storage medium having instructions stored therein which when executed on a processor cause the processor to perform the method of any one of the previous paragraphs.

In accordance with a still further aspect of the present invention there is provided an apparatus for controlling the position of an eye-box of a head-up display, the apparatus comprising an electronic processor having an electrical input for receiving one or more signals each indicative of a value of a current position of an eye of a user; and an electronic memory device electrically coupled to the electronic processor and having instructions stored therein, wherein the processor is configured to access the memory device and execute the instructions stored therein such that it becomes configured to: determine a transformation in dependence on the positional data; and to output a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display.

Within the scope of this application it is expressly intended that the various aspects, embodiments, examples and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. That is, all embodiments and/or features of any embodiment can be combined in any way and/or combination, unless such features are incompatible. The applicant reserves the right to change any originally filed claim or file any new claim accordingly, including the right to amend any originally filed claim to depend from and/or incorporate any feature of any other claim although not originally claimed in that manner.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Fig. 1 shows a schematic side view of a vehicle comprising a head-up display and an apparatus comprising a control means in accordance with an embodiment of the invention;

Fig. 2 shows a further schematic side view of the vehicle and user shown in Fig. 1;

Fig. 3 shows a schematic plan view of the vehicle shown in Fig. 1;

Fig. 4 shows a further schematic plan view of the vehicle shown in Fig. 3;

Fig. 5 shows an image captured by the imaging means and illustrates an example of how the positional data indicative of a position of the eyes of the user are determined in accordance with an embodiment of the invention;

Fig. 6 shows an example of a calibration image for use in an embodiment of the invention;

Fig. 7 shows an example of a detected image that has been captured by an imaging device forming part of an embodiment of the invention;

Fig. 8 shows a diagram illustrating functional blocks of a system comprising an imaging means, a control means and a head-up display in accordance with an embodiment of the invention;

Fig. 9 shows a schematic diagram of an apparatus comprising a control means in accordance with an embodiment of the invention;

Fig. 10 shows a flowchart illustrating a method of controlling the position of an eye-box of a head-up display in accordance with an embodiment of the invention;

Fig. 11 shows a flowchart of a method in accordance with an embodiment of the invention;

Fig. 12 shows a flowchart of a method in accordance with an embodiment of the invention;

Fig. 13 shows a flowchart illustrating a method of transforming images for display on a head-up display in accordance with an embodiment of the invention;

Fig. 14 shows a flowchart of a method in accordance with an embodiment of the invention;

Fig. 15 shows a diagram illustrating functional blocks of a system in accordance with an embodiment of the invention; and

Fig. 16 shows a diagram illustrating functional blocks of a system in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

The Figures illustrate an apparatus 101 for controlling the position of an eye-box 102 of a head-up display 103, the apparatus 101 comprising a control means 114 configured to: obtain positional data representative of a current position of at least one eye 104 of a user 105; and provide a positioning signal in dependence on the positional data for causing movement of a moveable element 108 of a head-up display 103 to adjust the position of the eye-box 102 of the head-up display 103 relative to the position of the at least one eye 104 of the user 105.

The Figures also illustrate an apparatus 101 for transforming images for display on a head-up display 103, the apparatus 101 comprising a control means 114 configured to: obtain positional data representative of a current position of at least one eye 104 of a user 105; determine a transformation in dependence on the positional data; and output a transformation signal for applying the transformation to image data representative of an image to generate transformed image data representative of a transformed image to be displayed on the head-up display 103.

A vehicle 106 including a system 120 comprising a head-up display 103 is shown in a schematic side view in Fig. 1. The head-up display 103 comprises a display device 107, which may comprise a light emitting diode (LED) display, a liquid crystal display (LCD), an organic light emitting diode (OLED) display or another type of illuminated display, as is known in the art. The head-up display 103 also comprises a moveable optical element 108 for directing the light emitted by the display device 107 onto a windshield 109 where it is reflected towards the eyes 104 of a user 105 of the vehicle 106 when seated in a front seat 110 of the vehicle 106. In the present embodiment, the user 105 is the driver 105 of the vehicle 106 and the moveable optical element 108 is arranged to direct the light emitted by the display device 107 onto the windshield 109 where it is reflected towards the eyes 104 of the user 105 of the vehicle 106 when seated in the user's seat 110. Consequently, the image displayed by the display device 107 is presented to the user 105 as a virtual object 113 that appears to be located on the outside of the windshield 109 of the vehicle 106. In the present embodiment, the moveable optical element 108 comprises a mirror which reflects the light from the display device 107 towards the windshield 109.
Typically, the mirror is a part of a mirror galvanometer, or the mirror is mounted on a motorized gimbal, to enable it to be reoriented. In the present embodiment, the moveable optical element 108 is the only optical element on the path of the light from the display device 107 to the windshield 109, but it will be appreciated that other embodiments may have more than one optical element along this light path.

The head-up display 103 also comprises actuation means 111 which is configured to enable adjustment of the orientation of the moveable optical element 108 so that the direction of the light leaving the moveable optical element 108 may be adjusted. This enables the position of the eye-box 102 of the head-up display 103 to be adjusted so that the eyes 104 of the user 105 are positioned within the eye-box 102 to provide the user 105 with a clear view of the image displayed by the head-up display 103. In the present embodiment, the moveable optical element 108 is adjustable about a lateral axis (in a direction into the paper as viewed in Fig. 1) and adjustable about a second axis 112 substantially perpendicular to the lateral axis. The actuation means 111 may comprise electric motors, or electric stepper motors, arranged to adjust the orientation of the moveable optical element 108 in dependence on signals received by the actuation means 111.

The head-up display 103 operates in dependence on signals provided by the control means 114, which provides the signals in dependence on signals it receives from an imaging means 115. The imaging means 115 comprises one or more cameras located within the vehicle 106 which are configured to capture images of the face of the user 105 in order to obtain information defining a position of at least one of the user's eyes. Image data representing images captured by the imaging means 115 are analyzed to identify a representation of at least one eye 104 of the user 105. Positional data, which may comprise 2-dimensional coordinates of the representation of the at least one eye 104 within the image, are thus determined. This analysis may be performed by the control means 114 in dependence on receiving image data from the one or more cameras providing the imaging means 115, or the analysis may be performed by one or more processors located within the imaging means 115, in which case the positional data is received by the control means 114 from a processor of the imaging means 115. In this latter case, the one or more processors of the imaging means 115 may be regarded as a part of an apparatus 101 comprising the control means 114.
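The derivation of 2-dimensional positional data from a captured frame can be sketched in miniature. A production system would use a proper face and eye tracker; here the "eye" is simply the darkest pixel of a toy grayscale frame, which is purely illustrative.

```python
def eye_position(frame):
    """Return the (column, row) coordinates of the darkest pixel in a
    2-D list of 0-255 grayscale values, standing in for the detected
    centre of the user's eye within the captured image."""
    best_xy, best_val = None, 256
    for row_idx, row in enumerate(frame):
        for col_idx, value in enumerate(row):
            if value < best_val:
                best_val, best_xy = value, (col_idx, row_idx)
    return best_xy
```

The returned (column, row) pair corresponds to the 2-dimensional coordinates of the representation of the eye within the image described above, and could be produced either by the control means or by a processor inside the imaging means.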

The control means 114 is configured to provide one or more signals to the head-up display 103 in dependence on the positional data obtained from the image data. In the present example, it provides two different output signals in dependence on positional data indicating a position of one or both eyes 104 of the user 105. The first of the two signals is supplied to the actuation means 111 to cause the actuation means 111 to rotate the moveable optical element 108, so that the position of the eye-box 102 of the head-up display 103 is adjusted relative to the position of the at least one eye 104 of the user 105.
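The first signal path, from eye position to mirror rotation, can be sketched as a simple proportional mapping onto the two mirror axes. The step resolution and the optical gain below are invented values; a real actuation means would be calibrated to the optics of the particular head-up display.

```python
STEP_DEG = 0.1          # assumed stepper-motor resolution, degrees per step
GAIN_DEG_PER_MM = 0.05  # assumed mirror tilt needed per millimetre of eye offset

def mirror_steps(eye_offset_mm):
    """Convert an (x, y) eye offset in millimetres from the eye-box
    centre into step counts for the two mirror axes: tilt about the
    lateral axis follows the vertical offset, tilt about the second
    axis follows the lateral offset."""
    dx, dy = eye_offset_mm
    lateral_axis_deg = dy * GAIN_DEG_PER_MM
    second_axis_deg = dx * GAIN_DEG_PER_MM
    return (round(lateral_axis_deg / STEP_DEG),
            round(second_axis_deg / STEP_DEG))
```

A proportional mapping like this keeps the eye-box centred on the detected eye position; a closed-loop design would instead iterate on fresh positional data after each move.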

It will be appreciated that vehicle windshields are typically formed as glass or other transparent members having compound curves, with the screen being curved from top to bottom as well as from side to side. Because the radius of curvature of the windshield 109 varies across the vehicle, an image reflected from a display device 107 off that windshield 109 towards the user will be affected accordingly, and this can be particularly noticeable to a user if the point from which that image is reflected moves around the windshield 109 in use. Thus, a potential problem with this movement of the eye-box 102 is that the region of the windshield 109 used to reflect the light towards the user 105 will vary, changing as the user moves their head, and the windshield 109 is curved, typically with radii of curvature of the windshield 109 differing from one point to another point. Also, the angles at which the light is reflected off the windshield 109 will be altered when the eye-box is repositioned. Consequently, the image displayed by the display device 107 may be distorted in varying ways before it reaches the user's eyes 104, depending upon the positioning of the eye-box 102.

For example, the user 105 is shown in Fig. 1 seated in an upright position and the orientation of the moveable optical element 108 has been adjusted by the actuation means 111, so that the user's eyes 104 are positioned within the eye-box 102. Light from the moveable optical element 108 is reflected off a first region 116 of the windshield 109 at first angles 117. The vehicle 106 and user 105 of Fig. 1 are shown in Fig. 2, but with the user 105 in a slouched position and therefore with a lower eye level within the vehicle 106 relative to the imaging means 115. The orientation of the moveable optical element 108 has been adjusted by the actuation means 111, so that the user's eyes 104 are once again positioned within the eye-box 102. Light from the moveable optical element 108 is reflected off a second region 201, lower down the windshield 109, and at second angles 202 to the windshield 109, which are larger than the first angles 117.

Figs. 3 and 4 additionally illustrate how the lateral position assumed by the user 105 also affects the region of the windshield 109 that is used to reflect the light from the display device 107 and the angles at which the light is reflected off the windshield 109. A schematic plan view of the vehicle 106 and the head 301 of the user 105 is shown in Fig. 3 with the user's head 301 positioned towards the middle of the vehicle 106, and a similar schematic plan view is shown in Fig. 4 with the user's head 301 positioned nearer to the user's side door 302.

In the example of Fig. 3 the moveable optical element 108 has been tilted about the axis 112 to direct light onto the windshield 109 and reflect it from a region 303 (shown hatched) towards the more central position taken up by the eyes 104 of the user 105. In the example of Fig. 4 the moveable optical element 108 has been tilted about the axis 112 to direct light onto the windshield 109 and reflect it from another region 304 (shown hatched) towards the position adjacent to the door 302, taken up by the eyes 104 of the user 105.

Thus, varying regions of the windshield 109 are used to reflect light towards the user's eyes 104, and the light is reflected at varying angles from the windshield 109, in dependence on the positioning of the user's eyes 104 and the positioning of the moveable optical element 108. Consequently, the image displayed by the display device 107 is distorted in correspondingly various ways by the reflection in the windshield 109, depending on the position of the user's eyes 104 and the position of the moveable optical element 108.

In view of the varying optical distortion produced by the varying region of reflection on the windshield 109, the second of the two signals comprises a transformation signal for applying a transformation to image data that represents an image that is to be displayed on the display device 107 of the head-up display 103. The resulting transformation to the image data causes a distortion of the image displayed by the display device 107 in an opposite sense to the distortion created by the optical components, including the windshield 109, i.e. the image to be presented by the display device 107 is transformed by a transformation that is the inverse of the transformation produced by the optical distortion. Consequently, the image observed by the user 105 appears to the user to be free of distortion.

The transformation signal provided by the control means 114 comprises a transformation that is determined by the control means 114 in dependence on the positional data obtained from the analysis of the image captured by the imaging means 115.

An example of how the positional data indicative of a position of the eyes 104 of the user 105 are determined is illustrated in Fig. 5, which illustrates an image captured by the imaging means 115. Firstly, the image is analyzed to detect a number of features of the user's eyes 104 that define a border geometry (represented by rectangle 501) surrounding the user's eyes 104. For example, the analysis may identify the highest and lowest points of a user's eyes 104, the leftmost point of the left eye and the rightmost point of the right eye, and define the border geometry as an upright rectangle having an edge passing through each of these points. Positional data, which may comprise 2-dimensional co-ordinates of the border geometry 501 within a field of view 502, are determined. For example, where the border geometry 501 comprises a rectangle, the 2-dimensional co-ordinates may be the co-ordinates of the centre of the rectangle. This process of analyzing the image to determine the positional data is typically performed by one or more processors within the imaging means 115. In alternative embodiments, the positional data defines a 1-dimensional position and the automated repositioning of the eye-box 102 is only in one dimension, along a vertical or horizontal axis with respect to the vehicle 106.
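The border-geometry computation described above can be sketched in a few lines. This is an illustrative sketch only (the function name and point format are not from the application): it takes detected eye feature points and returns the upright bounding rectangle and its centre, the latter serving as the 2-dimensional positional data.

```python
def border_geometry(eye_points):
    """Upright bounding rectangle around detected eye feature points.

    eye_points: list of (x, y) pixel coordinates, e.g. the highest and
    lowest points of each eye plus the outermost corner of each eye.
    Returns ((left, top, right, bottom), (cx, cy)), where (cx, cy) is
    the centre of the rectangle, used as the positional data.
    """
    xs = [p[0] for p in eye_points]
    ys = [p[1] for p in eye_points]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    centre = ((left + right) / 2.0, (top + bottom) / 2.0)
    return (left, top, right, bottom), centre
```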

The position of the eye-box 102 of the head-up display 103 is adjusted by the control means 114 when the current positions of the eyes 104 of the user 105 are not aligned with the current position of the eye-box 102. In the example of Fig. 5, the control means 114 is arranged to determine 2-dimensional co-ordinates of a position (illustrated by an "X" 504) of one or both eyes 104 of the user 105. A system (120 in Fig. 1) comprising the imaging means 115, the head-up display 103 and the control means 114 is calibrated for a finite number of different positions 503 of the eye-box 102. For example, central points 503 of the different positions of the eye-box 102 are illustrated in Fig. 5. When the current position 504 of the eyes 104 is within a threshold distance of the current central point 503A of the eye-box 102, the control means 114 may be arranged to maintain the current position of the eye-box 102. When the current position 504 of the eyes 104 is not within the threshold distance of the current central point 503A of the eye-box 102, the control means 114 may be arranged to adjust the position of the eye-box 102 to the calibrated position nearest to the current position 504 of the eyes 104.
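The threshold test and nearest-calibrated-position selection described above might be sketched as follows; the function name and arguments are hypothetical, and a Euclidean distance metric is assumed.

```python
import math

def select_eyebox_position(eye_pos, current_centre, calibrated_centres, threshold):
    """Decide whether to keep the current eye-box position or move it.

    eye_pos, current_centre: (x, y) positions in the camera's field of view.
    calibrated_centres: the finite set of calibrated central points (503).
    Returns the central point at which the eye-box should be positioned.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Eyes still within the threshold of the current centre: no movement.
    if dist(eye_pos, current_centre) <= threshold:
        return current_centre
    # Otherwise move to the calibrated position nearest the eyes.
    return min(calibrated_centres, key=lambda c: dist(eye_pos, c))
```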

In the present embodiment, the system (120 in Fig. 1) is calibrated to compensate for the optical distortion caused by the head-up display 103 for each of the finite number of calibrated positions with central points 503 of the eye-box 102. That is, for each calibrated position, the image displayed by the display device 107 may be distorted in a different way by the reflection in the windshield 109, depending on the position of the user's eyes 104 and the position of the moveable optical element 108. Thus, for each calibrated position, the control means 114 determines a corresponding transformation signal for applying a transformation to image data that represents an image to be displayed on the display device 107 of the head-up display 103.

To calibrate the system 120 in this way, a calibration image may be displayed on the display device 107 of the head-up display 103 and an imaging device such as a camera (not shown) is located at each of the central points 503 in turn, while the head-up display 103 is arranged to position the centre of the eye-box 102 at that point 503. The imaging device is arranged to detect the image projected by the head-up display 103, and the detected image will typically be distorted.

An example of a calibration image 601 is shown in Fig. 6. The calibration image 601 comprises a pattern defining a regular array of accurately positioned features 602. In the present example the features 602 are circles 602 arranged in a square array.

An example of a detected image 701 that has been captured by the imaging device during calibration is shown in Fig. 7. A grid 702 of squares is also shown which illustrates the distortion produced by the head-up display 103. The grid 702 has been chosen such that, in a non-distorted image, the centres of the circles 602 would coincide with the vertices 703 of the squares of the grid 702. However, due to the distortion produced by the optical components of the head-up display 103, most of the centres of the circles 602 of the detected image 701 are separated from the corresponding vertex. For example, the centre of a first circle 602A is separated from a corresponding vertex 703A by a displacement vector 704A and the centre of a second circle 602B is separated from a corresponding vertex 703B by a displacement vector 704B. The calibration process may therefore determine a displacement vector, such as vectors 704A and 704B, for each of the circles 602. These displacement vectors represent a transformation caused by the optical components of the head-up display 103 to the original displayed image 601. Therefore an approximation to the transformation caused by the optical components of the head-up display 103 may be determined.
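The displacement-vector computation is straightforward once the detected circle centres have been paired with their expected grid vertices; a minimal sketch, assuming both point lists are in corresponding order:

```python
def displacement_vectors(detected_centres, grid_vertices):
    """Per-feature displacement between detected circle centres and the
    grid vertices they would coincide with in an undistorted image.

    Both arguments are equal-length lists of (x, y) points in the same
    order; each returned vector is (dx, dy) = detected - expected.
    """
    return [(dx - vx, dy - vy)
            for (dx, dy), (vx, vy) in zip(detected_centres, grid_vertices)]
```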

Determination of the image transformation (e.g. a set of head-up display specific distortions) may be achieved by capturing the image (e.g. on a camera) at the eye-box corners and edges. Sampling between the centre of the eye-box out towards the corners/edges may be used to understand the relationship of the distortion. The positions of the circles in the image may be detected through a circle detection algorithm and used to compute the transform. The inverse transform can then be computed through a common mathematics process from each sample.
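The "common mathematics process" is not specified in the application. As one plausible sketch, if the distortion is approximated as affine, the transform and its inverse can be recovered from point correspondences by least squares; real head-up-display distortion is generally non-linear, so in practice a per-region or polynomial warp map would be fitted instead. The function names here are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 2) point arrays (e.g. grid vertices and the detected
    circle centres). Returns a 3x3 homogeneous transform matrix.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    A_h = np.hstack([src, ones])                        # (N, 3) homogeneous
    coeffs, *_ = np.linalg.lstsq(A_h, dst, rcond=None)  # (3, 2) solution
    M = np.eye(3)
    M[:2, :] = coeffs.T
    return M

def inverse_transform(M):
    """Inverse transform, to be applied to the image before display."""
    return np.linalg.inv(M)
```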

As an alternative to projecting a calibration image and capturing the resulting image, the image transformation caused by the optical system of the head-up display 103 may be determined by software, such as SPEOS or ZEMAX, which models the propagation of rays through the optical system. This software is used to simulate the level of distortion in the optical system from each one of a set of positions that the eyes of the user may assume during use.

In each of these ways an inverse transformation may be determined and stored for each of the calibrated positions (points 503 in Fig. 5) of the system (120 in Fig. 1).

Thus, during use, when the head-up display 103 is arranged to position its eye-box 102 at, or adjacent to, any one of the calibrated positions having a central point 503, a corresponding inverse transformation may be applied to the image to be displayed by the display device 107 of the head-up display 103, and because the transformation applied to the image to be displayed by the display device 107 approximates to the inverse of the transformation applied by the optical components of the head-up display 103, the image observed by the user 105 appears to be free of distortion. The inverse transform, which is applied to the image that is to be displayed by the display device 107, is selected in dependence on a nearest neighbour algorithm using the current eye position of the user 105.
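The nearest-neighbour selection of a stored inverse transform might look like the following sketch, with the calibrated central points 503 as keys of a look-up table. The dictionary data structure is an assumption; the application only requires that a transformation be retrievable per calibrated position.

```python
import math

def lookup_transformation(eye_pos, transform_table):
    """Retrieve the stored transformation for the calibrated position
    nearest the current eye position (nearest-neighbour selection).

    transform_table: dict mapping calibrated central points (x, y) to
    the transformation data produced during calibration.
    """
    nearest = min(transform_table,
                  key=lambda c: math.hypot(c[0] - eye_pos[0], c[1] - eye_pos[1]))
    return transform_table[nearest]
```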

A diagram illustrating functional blocks of an embodiment of the system 120 comprising the imaging means 115, the control means 114 and the head-up display 103 is shown in Fig. 8. A picture generator 801 provides image data for display by the display device 107 of the head-up display 103. In the present example, the image data generated by the picture generator 801 is generated in dependence on one or more signals comprising information received from one or more other systems 803. For example, the one or more signals may be indicative of the current road speed and received from another system, such as an anti-lock braking system (ABS) or speedometer, or indicative of a selected gear and received from a transmission control module (TCM). The picture generator 801 generates image data representing a graphical image that illustrates the information in a format determined by graphical data stored in a memory device 802.

An image analysing means 805 receives images captured by the imaging means 115 and analyses each of the images to generate the positional data representative of a current position of at least one eye 104 of a user 105.

In the present example, a positioning determination means 806 receives the positional data and provides a positioning signal in dependence on the positional data for causing movement of the moveable element 108 of the head-up display 103 to adjust the position of the eye-box 102 of the head-up display 103 relative to the position of the eyes 104 of the user 105. The positioning determination means 806 may compare the positional data with data defining the central points (503 in Fig. 5) of the calibrated positions to determine if the current position of the eyes 104 is within a threshold distance of the centre of the eye-box 102. If it is not, then the positioning determination means 806 may provide an output signal to the actuation means 111 to cause it to move the moveable element 108 of the head-up display 103 to position the centre of the eye-box 102 at the central point (503 in Fig. 5) of the calibrated position nearest to the position of the eyes 104 of the user 105. In the present embodiment, the positional data generated by the image analysing means 805 is also provided to a transformation determination means 807 configured to determine a transformation in dependence on the positional data and to output a transformation signal for applying the transformation to image data. The transformation may be determined by retrieving transformation data stored in a memory device 808 in dependence on the received positional data. For example, the transformation data may have been produced in a calibration process as described above with regard to Figs. 6 and 7 and stored in a look-up table in the memory device 808. Thus, the transformation determination means 807 may be configured to retrieve transformation data, corresponding to the positional data, from the look-up table.

A picture transformation means 809 is configured to receive the transformation signal from the transformation determination means 807 and image data from the picture generator 801 and apply the transformation to the image data to generate transformed image data representative of a transformed image to be displayed on the head-up display 103. Thus, the picture transformation means 809 provides a signal to the display device 107 to cause it to display a transformed image.
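Applying the transformation to the image data amounts to resampling the source image through an inverse mapping. A minimal nearest-neighbour sketch follows; it is illustrative only, as a production implementation would use the display pipeline's warp hardware or an interpolating remap.

```python
import numpy as np

def warp_image(image, inv_map):
    """Apply an inverse mapping to image data, producing the pre-distorted
    image that, after passing through the optics, appears undistorted.

    image: (H, W) array. inv_map: function (x, y) -> (x_src, y_src)
    giving, for each output pixel, the source pixel to sample
    (nearest-neighbour sampling; out-of-range pixels are left black).
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            xs, ys = inv_map(x, y)
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < w and 0 <= yi < h:
                out[y, x] = image[yi, xi]
    return out
```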

In an alternative embodiment, the positioning determination means 806 may be configured to provide an output signal dependent on the positional data to continuously move the centre of the eye-box 102 to the position of the eyes 104. In this case, the transformation determination means 807 may be configured to determine which one of the calibrated positions has a central point (503 in Fig. 5) nearest to the current eye position and output a transformation signal comprising transformation data corresponding to that calibrated position.

Apparatus 101 comprising the control means 114 is shown schematically in Fig. 9. The control means 114 comprises one or more electronic processors 902 and one or more electronic memory devices 903. A computer program 904 comprising instructions is stored in the memory device 903 and the one or more electronic processors 902 are configured to execute the instructions to provide at least the positioning determination means 806 and/or the transformation determination means 807 described above and shown in Fig. 8, and/or to perform any one of the methods described below with reference to Figs. 10 to 14. In embodiments in which the control means 114 comprises several processors, the processors may be located within a single module or may be distributed over several different modules. For example, the image analysing means (805 of Fig. 8) may be provided by a processor 902 of the control means 114 that is located within a camera 115 configured to capture images of the eyes 104 of the user 105, while the positioning determination means 806 and/or the transformation determination means 807 shown in Fig. 8 may be located within a unit that includes the display device 107 of the head-up display 103. For example, one or more processors 902 of the control means 114 may be located within a unit that includes the display device 107 of the head-up display 103, and the one or more processors 902 may also be configured to perform the picture generation performed by the picture generator 801 and the processes performed by the picture transformation means 809 and the transformation determination means 807.

In the illustrated embodiment, the apparatus 101 also comprises input/output means 905 for receiving and transmitting communications to other electronic devices. The input/output means 905 may comprise one or more transceivers for communicating with other devices over data buses, such as a controller area network bus (CAN bus) of the vehicle 106.

The computer program 904 may be transferred to the memory device 903 via a non-transitory computer readable medium, such as a CD-ROM 906 or a portable memory device, or via a network, such as a wireless network.

A flowchart illustrating a method 1000 of controlling the position of an eye-box of a head-up display, performable by the control means 114, is shown in Fig. 10. The method 1000 comprises, at block 1001, obtaining positional data representing a current position of one or more eyes of a user. This process may comprise receiving positional data from a processor that is configured to perform an analysis of an image captured by an imaging means.

Alternatively, the process at block 1001 may comprise the processes illustrated in the flowchart of Fig. 11. Thus the method 1000 may comprise, at block 1101 of process 1001, receiving from an imaging means an image signal from which positional data is obtainable, and, at block 1102, analysing image data contained within the image signal to identify a representation of at least one eye of the user. At block 1103, the process 1001 comprises obtaining the positional data representative of a current position of an eye of a user from the received image signal.

Returning to Fig. 10, the method 1000 also comprises, at block 1002, causing movement of a moveable element of a head-up display in dependence on the positional data to adjust the position of the eye-box of the head-up display relative to the vehicle and maintain co-location of the eye-box with the positional data, that is to say, to match the position of the eye-box with the current position of the one or more eyes of the user. The method 1000 is typically repeatedly performed, each time using the most recently received positional data obtained from the most recently captured image. Thus, the method 1000 repeatedly provides positioning signals, or continuously provides a positioning signal, to adjust the position of the eye-box of the head-up display. The process at block 1002 may comprise the processes illustrated in the flowchart of Fig. 12. At block 1201, the obtained positional data is compared with stored positional data to obtain a position value representative of a position of the moveable element of the head-up display associated with the obtained positional data. The process at block 1201 may comprise looking up the positional data in a stored look-up table to obtain the position value.

The process 1002 also comprises, at block 1202, providing the positioning signal in dependence on the position value. Thus, a positioning signal may be provided to a head-up display to cause the position of the eye-box of the head-up display to be moved in dependence on the positional data.

A flowchart illustrating a method 1300 of transforming images for display on a head-up display, performable by the control means 114, is shown in Fig. 13.

At block 1301 of the method 1300, positional data representative of a current position of at least one eye of a user is obtained. The process at block 1301 of the method 1300 may be the same as the process performed at block 1001 of the method 1000, as described above. At block 1302, the method 1300 determines a transformation in dependence on the positional data obtained at block 1301. This process may be as described above with reference to Figs. 6 and 7. At block 1303, the method 1300 outputs a transformation signal for applying the transformation to image data representative of an image, in order to generate transformed image data representative of a transformed image to be displayed on a head-up display. The method 1300 is typically repeatedly performed, each time using the most recently received positional data obtained from the most recently captured image. Thus, the method 1300 repeatedly provides an output signal that causes the head-up display to transform the image in dependence on the most recently determined positions of the eyes of the user. Examples of the process at block 1302 of the method 1300 are illustrated in the process block 1401 of Fig. 14. Thus, the process at 1302 may comprise looking up the positional data and/or the position of the moveable element in a look-up table to obtain the transformation to be applied to the image data at block 1303.

A diagram illustrating functional components of an alternative system 120A is shown in Fig. 15. Components common to both the system 120 and the system 120A have been provided with the same reference signs. Like the system 120, the system 120A has an image analysing means 805 which receives images captured by the imaging means 115 and analyses each of the images to generate the positional data representative of a current position of at least one eye 104 of a user 105.

A positioning determination means 806 receives the positional data and provides a positioning signal in dependence on the positional data for causing movement of the moveable element 108 of the head-up display 103 to adjust the position of the eye-box 102 of the head-up display 103 so as to remain in view of the eyes 104 of the user 105 regardless of the user moving their head within the vehicle in use.

The system 120A also has a picture generator 801 that provides image data for display by the display device 107 of the head-up display 103. The image data generated by the picture generator 801 may be generated in dependence on one or more signals comprising information received from one or more other systems 803. The picture generator 801 generates image data representing a graphical image that illustrates the information in a format determined by graphical data stored in a memory device 802. Unlike in the system 120, the image data generated by the picture generator 801 is provided to the display device 107 of the head-up display 103 without being transformed beforehand. Thus, the image observed by the user 105 may at times appear to be distorted depending on the position of the eyes of the user.

However, the system 120A, like the system 120, ensures that the user 105 is able to view the image provided by the head-up display 103, by adjusting the position of the eye-box 102 in dependence on the position of the user's eyes 104.

A diagram illustrating functional components of another alternative system 120B is shown in Fig. 16. Components common to both the system 120 and the system 120B have been provided with the same reference signs. Like the system 120, the system 120B has a picture generator 801 that provides image data for display by the display device 107 of the head-up display 103. The image data generated by the picture generator 801 may be generated in dependence on one or more signals comprising information received from one or more other systems 803. The picture generator 801 generates image data representing a graphical image that illustrates the information in a format determined by graphical data stored in a memory device 802. The system 120B also includes an image analysing means 805 which receives images captured by the imaging means 115 and analyses each of the images to generate the positional data representative of a current position of at least one eye 104 of a user 105.

The positional data generated by the image analysing means 805 is provided to a transformation determination means 807 configured to determine a transformation in dependence on the positional data and to output a transformation signal for applying the transformation to image data. The transformation may be determined by retrieving transformation data stored in a memory device 808 in dependence on the received positional data.

The transformation data may be previously produced and stored in a calibration process similar to that described above with reference to Figs. 5, 6 and 7. However, in this instance, the camera used for the calibration process is moved between calibration positions (similar to points 503 in Fig. 5), while the eye-box 102 of the head-up display 103 remains stationary, and the camera is caused to capture images (similar to image 701 in Fig. 7) of the calibration image (601 in Fig. 6). The transformation is then determined from the displacements (similar to vectors 704A and 704B in Fig. 7) measured in those captured images. This process may be performed for a number of different static positions of the head-up display 103.

Returning to Fig. 16, a picture transformation means 809 is configured to receive the transformation signal from the transformation determination means 807 and image data from the picture generator 801 and apply the transformation to the image data to generate transformed image data representative of a transformed image to be displayed on the head-up display 103. Thus, the picture transformation means 809 provides a signal to the display device 107 to cause it to display a transformed image.

Unlike the system 120, the system 120B does not include a positioning determination means 806 for controlling the position of the moveable optical element 108 of the head-up display 103. However, the picture transformation means 809 is still considered to be advantageous, particularly in a system having a head-up display with a relatively large eye-box 102, in which the user 105 can move his eye-position by substantial distances and still see the whole of the displayed image. As the user 105 moves their head (up and down and/or left and right) the apparent distortion produced by the optical components (and particularly the windshield 109) of the head-up display 103 is likely to vary depending upon the position of the eyes 104 of the user 105, even though the eye-box 102 remains stationary. However, by transforming the image data using an approximation to the inverse of the transformation produced by the optical components of the head-up display 103, the system 120B is able to provide the user 105 with a substantially undistorted view of the image.

For purposes of this disclosure, it is to be understood that the controller(s) or control means described herein can each comprise a control unit or computational device having one or more electronic processors. A vehicle and/or a system thereof may comprise a single control unit or electronic controller or alternatively different functions of the controller(s) may be embodied in, or hosted in, different control units or controllers. A set of instructions could be provided which, when executed, cause said controller(s) or control unit(s) to implement the control techniques described herein (including the described method(s)). The set of instructions may be embedded in one or more electronic processors, or alternatively, the set of instructions could be provided as software to be executed by one or more electronic processor(s). For example, a first controller may be implemented in software run on one or more electronic processors, and one or more other controllers may also be implemented in software run on one or more electronic processors, optionally the same one or more processors as the first controller. It will be appreciated, however, that other arrangements are also useful, and therefore, the present disclosure is not intended to be limited to any particular arrangement. In any event, the set of instructions described above may be embedded in a computer-readable storage medium (e.g., a non-transitory storage medium) that may comprise any mechanism for storing information in a form readable by a machine or electronic processors/computational device, including, without limitation: a magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or electrical or other types of medium for storing such information/instructions.

The blocks illustrated in Figs. 10 to 14 may represent steps in a method and/or sections of code in the computer program 904. The illustration of a particular order to the blocks does not necessarily imply that there is a required or preferred order for the blocks, and the order and arrangement of the blocks may be varied. Furthermore, it may be possible for some steps to be omitted.

Although embodiments of the present invention have been described in the preceding paragraphs with reference to various examples, it should be appreciated that modifications to the examples given can be made without departing from the scope of the invention as claimed.

Features described in the preceding description may be used in combinations other than the combinations explicitly described.

Although functions have been described with reference to certain features, those functions may be performable by other features whether described or not.

Although features have been described with reference to certain embodiments, those features may also be present in other embodiments whether described or not.

Whilst endeavoring in the foregoing specification to draw attention to those features of the invention believed to be of particular importance, it should be understood that the Applicant claims protection in respect of any patentable feature or combination of features hereinbefore referred to and/or shown in the drawings whether or not particular emphasis has been placed thereon.