

Title:
APPARATUS FOR DISPLAY ADJUSTMENT AND METHOD THEREOF
Document Type and Number:
WIPO Patent Application WO/2017/026942
Kind Code:
A1
Abstract:
An apparatus and method for display adjustment, the apparatus comprising a display output; an input interface; a processor; and a memory for storing one or more applications, the processor being configured to execute the one or more applications to control the apparatus for obtaining eye information of a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia through the input interface; generating a compensated image corrected for myopia, astigmatism, hyperopia, and/or presbyopia, whichever applicable to the user, based on the obtained eye information; and outputting to the display output the compensated image for viewing by the user without visual aid on a display connected to the display output, wherein the one or more applications is operable in background to control the apparatus such that a plurality of selected images for outputting to the display output are generated according to the generation of the compensated image.

Inventors:
CHAI WEI KUO ANDREW (SG)
Application Number:
PCT/SG2016/050380
Publication Date:
February 16, 2017
Filing Date:
August 08, 2016
Assignee:
CHAI WEI KUO ANDREW (SG)
International Classes:
G06T5/00
Domestic Patent References:
WO2011156721A12011-12-15
Foreign References:
US20140098121A12014-04-10
US6160576A2000-12-12
US20140267284A12014-09-18
CN104156971A2014-11-19
US20090079764A12009-03-26
Other References:
"A method for enhancing digital information displayed to computer users with visual refractive errors via spatial and spectral processing.", 25 May 2007 (2007-05-25), XP055112763, Retrieved from the Internet [retrieved on 20160919]
Attorney, Agent or Firm:
CHANG JIAN MING (SG)
CLAIMS

1. An apparatus for display adjustment, the apparatus comprising:

a display output;

an input interface;

a processor; and

a memory for storing one or more applications,

the processor being configured to execute the one or more applications to control the apparatus for

obtaining eye information of a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia through the input interface;

generating a compensated image corrected for myopia, astigmatism, hyperopia, and/or presbyopia, whichever applicable to the user, based on the obtained eye information; and

outputting to the display output the compensated image for viewing by the user without visual aid on a display connected to the display output,

wherein the one or more applications is configurable to operate in background to control the apparatus such that a plurality of selected images for outputting to the display output are generated according to the generation of the compensated image.

2. The apparatus of claim 1, wherein the eye information is obtained by the user inputting the eye information to a user input device connected to the input interface.

3. The apparatus of claim 1 or 2, wherein the apparatus is controlled to operate as an eye wavefront analyzer for

determining a wavefront aberration function for each eye of the user to obtain a point spread function for each eye based on the eye information, wherein the eye information includes eye refractive index of each eye of the user.

4. The apparatus of claim 3, wherein the apparatus is controlled for

averaging values of the obtained information of the respective eye refractive indices;

determining an uncompensated image function from data of an uncompensated image;

determining an average point spread function for eyes of the user based on the averaged values and the point spread function for each eye;

applying Fourier Transform on the average point spread function determined to obtain an optical transfer function;

determining a distorted image function based on the uncompensated image function and the optical transfer function;

applying inverse Fourier Transform on the distorted image function to obtain a compensated image function for correction of myopia and/or astigmatism; and

generating the compensated image corrected for myopia and/or astigmatism based on the compensated image function.

5. The apparatus of claim 3, wherein when the eye refractive index of each eye of the user is the same, the apparatus is controlled for

determining an uncompensated image function from data of an uncompensated image;

applying Fourier Transform on the point spread function common for each eye of the user to obtain an optical transfer function;

determining a distorted image function based on the uncompensated image function and the optical transfer function;

applying inverse Fourier Transform on the distorted image function to obtain a compensated image function for correction of myopia and/or astigmatism; and

generating the compensated image corrected for myopia and/or astigmatism based on the compensated image function.

6. The apparatus of claim 3, 4 or 5, wherein Campbell's method is used to adjust Zernike coefficients of the wavefront aberration function to take into consideration viewing pupil size of an eye.

7. The apparatus of any one of the preceding claims, wherein when hyperopia or presbyopia is applicable to the user, generating the compensated image includes magnifying content in the compensated image based on a multiplier derived from the obtained eye information.

8. The apparatus of any one of the preceding claims, wherein the plurality of selected images is images in a video file.

9. The apparatus of any one of the preceding claims, wherein the apparatus is connectable to a microphone and the one or more applications is configurable to control the apparatus to select the plurality of selected images by voice command through the microphone.

10. The apparatus of any one of the preceding claims, wherein the apparatus is connectable to a camera and the one or more applications is configurable to control the apparatus to detect use of visual aid on the user through capturing and analyzing an image of the user and stop the generation of the plurality of selected images upon detection of the use of visual aid.

11. The apparatus of any one of the preceding claims, wherein the apparatus is connectable to a camera and the one or more applications is configurable to control the apparatus to obtain the eye refractive index of the user as the eye information by the user capturing images of each eye of the user from directions including left side of the user's head facing the eyes of the user, front side of the user's head facing the eyes of the user and right side of the user's head facing the eyes of the user.

12. The apparatus of any one of the preceding claims, wherein the one or more applications is configured to control the apparatus to obtain input through the input interface to adjust clarity of the compensated image upon request by the user.

13. The apparatus of claim 12, wherein the input obtained through the input interface includes increment or decrement interval of a predetermined degree with respect to axis of astigmatism and/or in dioptres.

14. The apparatus of any one of the preceding claims, wherein generating the compensated image includes a step of adjusting contrast of grey tone, a step of adjusting contrast of background color, or a step of adjusting brightness, or any combination of said steps.

15. The apparatus of any one of the preceding claims, wherein the plurality of selected images includes all images to be outputted to the display output.

16. A mobile device for display adjustment, the mobile device comprising:

the display;

the apparatus of any one of the preceding claims; and

a user input device connectable to the input interface.

17. A method for display adjustment, the method comprising:

obtaining eye information of a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia;

generating a compensated image corrected for myopia, astigmatism, hyperopia, and/or presbyopia, whichever applicable to the user, based on the obtained eye information;

displaying the compensated image for viewing by the user without visual aid on a display; and

operating in background one or more applications to control an apparatus such that a plurality of selected images for displaying on the display are generated according to the generation of the compensated image.

Description:
Apparatus For Display Adjustment And Method Thereof

FIELD

The present invention relates to an apparatus, a mobile device and a method for display adjustment, in particular, adjustment of one or more images to be displayed on a display for viewing by a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia.

BACKGROUND

Mobile devices and consumer electronics displays typically provide only a function allowing a user to adjust the size of the font or other content in an image displayed on a display of the mobile device or consumer electronics display. Such a function is useful only for users with hyperopia. A need therefore arises for a display which is enhanced with automatic adjustment to enable viewing by people with various types of visual impairment.

SUMMARY

The invention is defined in the independent claims. Some of the optional features of the invention are defined in the dependent claims.

According to an aspect of an example in the present disclosure, there is provided an apparatus for display adjustment, the apparatus comprising: a display output; an input interface; a processor; and a memory for storing one or more applications, the processor being configured to execute the one or more applications to control the apparatus for obtaining eye information of a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia through the input interface; generating a compensated image corrected for myopia, astigmatism, hyperopia, and/or presbyopia, whichever applicable to the user, based on the obtained eye information; and outputting to the display output the compensated image for viewing by the user without visual aid on a display connected to the display output, wherein the one or more applications is configurable to operate in background to control the apparatus such that a plurality of selected images for outputting to the display output are generated according to the generation of the compensated image.

According to another aspect of an example in the present disclosure, there is provided a mobile device for display adjustment, the mobile device comprising: the display; the apparatus; and a user input device connectable to the input interface.

According to another aspect of an example in the present disclosure, there is provided a method for display adjustment, the method comprising: obtaining eye information of a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia; generating a compensated image corrected for myopia, astigmatism, hyperopia, and/or presbyopia, whichever applicable to the user, based on the obtained eye information; displaying the compensated image for viewing by the user without visual aid on a display; and operating in background one or more applications to control an apparatus such that a plurality of selected images for displaying on the display are generated according to the generation of the compensated image.

BRIEF DESCRIPTION OF DRAWINGS

Embodiments of the present invention are described below, by way of example only, with reference to the accompanying drawings, in which:

FIG. 1 to FIG. 4 illustrate eye conditions with the visual impairments Hyperopia, Myopia, Astigmatism and Presbyopia respectively, versus that of a normal and healthy eye with perfect vision.

FIG. 5 illustrates an apparatus according to an embodiment.

FIG. 6 is a flowchart of a process for correcting images for myopia and/or astigmatism based on obtained eye refractive index.

FIG. 7 is a flowchart of a process including magnification of images for correction of hyperopia and/or presbyopia based on obtained eye refractive index.

FIG. 8 is a flowchart of a process for correcting images for myopia and/or astigmatism based on obtained eye refractive index.

FIG. 9 is a flowchart of a process for automatic detection of visual aid worn by a user and for taking action based on the detection result.

DETAILED DESCRIPTION

An example of the present disclosure provides an apparatus, which can be a mobile device, operable by one or more applications that is coded in source code and executable on, for instance, the Android operating system and iPhone operating system, or the operating systems typically used for mobile devices. The one or more applications may be programmed to form part of one or more applications run by the operating systems and operable in background of the apparatus. Operating in background refers to running the one or more applications and allowing functions of the one or more applications to be carried out without affecting other operations of the apparatus. The apparatus includes devices such as mobile phones, tablet personal computers, laptops, Virtual Reality goggles/eyewear, and the like. The one or more applications when activated is configured to automatically manipulate one or more images to be displayed on a screen of a display to allow users with vision impairment to see the one or more images clearly as if they are wearing corrective lenses (or visual aids) for hyperopia, myopia, astigmatism and/or presbyopia. Activation of the one or more applications may be done through system settings of the respective operating system of the apparatus. In another example, the one or more applications may be an application downloadable from an online store and can be installed on the apparatus thereafter. The apparatus need not be an entire electronic device but may be a component or group of components in the electronic device. For instance, the apparatus can be a semiconductor integrated chipset in an electronic device.

Fig. 1 to Fig. 4 illustrate various out-of-focus images caused by Hyperopia (see Fig. 1), Myopia (see Fig. 2), Astigmatism (see Fig. 3) and Presbyopia (see Fig. 4) respectively, versus that of a normal and healthy eye with perfect 6/6 vision. Examples of a mobile device and methods of automatically adjusting images on a display of the mobile device to automatically correct the images for the respective visual impairments including Hyperopia, Myopia, Astigmatism and Presbyopia are described as follows.

Fig. 5 is a block diagram illustrating system architecture of an apparatus 500 according to an embodiment. The apparatus 500 may be a mobile phone, mobile computer, laptop computer, internet pad, portable digital assistant, pager, electronic book viewer, wearable device, media player, and other similar type of electronic device.

The apparatus 500 comprises at least one processor or processor unit 502 configured to execute instructions and to carry out operations associated with the apparatus 500. The processor 502 may comprise means, such as a digital signal processor, one or more microprocessor device, and circuitry, for performing various functions described later. The processor 502 may control reception, transmission, and processing of input and output data between components of the apparatus 500 based on instructions stored in a system memory 504. The processor 502 can be implemented on a single-chip, multiple chips or multiple electrical components. Some examples of architectures which can be adopted for the processor 502 include those of dedicated or embedded processors, and/or application specific integrated circuit. The processor 502 includes an appropriate bus system for enabling processing capabilities of the processor 502.

The processor 502 may comprise functionality to operate one or more computer programs (also known herein as one or more applications). The source code of the one or more computer programs may be stored in the system memory 504. The system memory 504 and the one or more applications cooperate to provide one or more functions described below in conjunction with FIGS. 6 - 9. The processor 502 operates together with an operating system 522 with source code stored in the system memory 504 to execute the source code of the one or more applications.

The system memory 504, depending on the exact configuration and type of computing device, may be volatile (such as RAM), non-volatile (such as ROM, Flash Memory, etc) or some combination of the two, including a cache area for temporary storage of data. The system memory 504 may reside in the processor 502 or include memory devices externally connected to the processor 502. For instance, data could also reside on a non-removable storage 508 and/or a removable storage 510 externally connected to the processor 502. The system memory 504 may comprise one or more memory circuitries and it may be partially integrated with the processor 502.

The apparatus 500 comprises an input interface 514 and an output interface 516, known collectively as an Input/Output (I/O) interface 512. The I/O interface 512 may include a Universal Serial Bus (USB), HDMI (High-Definition Multimedia Interface), and the like. The input interface 514 is connectable to one or more input devices, such as a microphone, a keyboard, a mouse, a keypad, and/or one or more buttons, actuators or touchscreen input devices and the like. The output interface 516 is connectable to one or more output devices, such as a speaker, a display of a type including but not limited to LED, OLED, LCD or plasma display and the like, for displaying images and information. The display may be configured to operate as a touch screen.

The apparatus 500 may comprise a communication interface 518 comprising a transmitter and a receiver. An antenna (or multiple antennae) may be connected to the communication interface 518. The communication interface 518 may operate in accordance with wired line protocols, such as Ethernet and digital subscriber line, with third generation (3G) wireless communication protocols, with fourth generation (4G) wireless communication protocols, such as Long Term Evolution (LTE) advanced protocols, wireless local area networking protocols, short-range wireless protocols, such as Bluetooth, and the like. The processor 502 may control the communication interface 518 to connect to another source or communicate with other communication devices wirelessly or through wired connections.

The system memory 504 stores source code of an eye wavefront analyzer 524 and an image computational component 526, which are software components. The eye wavefront analyzer 524 is used in the derivation of Zernike coefficients and corresponding Zernike polynomials representing a wavefront aberration function for the purpose of determining a point spread function of an eye of a user, which can be used to derive a compensated image function for generating a compensated image for displaying on a display (not shown in Fig. 5) connected to the output interface 516. The wavefront aberration function is determined based on eye refractive index obtained from a user input device (not shown in Fig. 5) through the input interface 514. The eye refractive index contains information on the degree of visual impairment of a user with myopia, astigmatism, hyperopia and/or presbyopia. The image computational component 526 is used to compute the compensated image function for myopia and/or astigmatism from the point spread function or point spread functions determined by the eye wavefront analyzer 524. The image computational component 526 is also used to derive a magnification factor or multiplier required for correction of hyperopia and/or presbyopia. Once obtained, the compensated image function may be further processed by the processor 502 to generate the compensated image to be displayed on the display.

In a case that a user has myopia and/or astigmatism and hyperopia and/or presbyopia, the compensated image function may be determined first to generate an image corrected for myopia and/or astigmatism. Thereafter, magnification of the image corrected for myopia and/or astigmatism to correct hyperopia and/or presbyopia can be performed on the image corrected for myopia and/or astigmatism. Alternatively, the magnification to correct hyperopia and/or presbyopia may be performed first, followed by generating the compensated image function of the magnified image.

The apparatus 500 may comprise other elements and/or components not illustrated in FIG. 5, such as further interfaces for connecting with certain devices, a power source (e.g. battery), media capturing elements, video and/or audio playing modules, and/or a user identification module.

FIG. 6 is a flowchart 600 illustrating an image correction process to generate one or more images corrected for myopia and/or astigmatism for displaying on a display connected to the output interface 516 of the apparatus 500 in Fig. 5. For purposes of illustration, the apparatus (500 in Fig. 5) is taken to be a mobile device for Fig. 6. This image correction process is executed by one or more applications stored in the system memory (504 in Fig. 5) of the apparatus 500.

The image correction process begins at a step 602 where a user starts up a graphical user interface of the apparatus (500 in Fig. 5) displayed on the connected display.

In a step 604, the user operates the graphical user interface to make system settings for the apparatus (500 in Fig. 5) by inputting eye information of various visual impairment of the user, including Myopia, Astigmatism, Hyperopia and/or Presbyopia. The eye information generally refers to eye prescription for the conditions of Myopia, Astigmatism, Hyperopia and/or Presbyopia. In this example, the user is assumed to have myopia and/or astigmatism and the eye information includes eye refractive index of each eye of the user.

In a step 606, the graphical user interface prompts the user to manually input left eye refractive index.

In a step 608, the graphical user interface prompts the user to manually input right eye refractive index.

The manual input is accomplished by the user entering the numerical value of the respective eye refractive indices in the graphical user interface. In the present example, the numerical values are entered via a touch screen keypad. The numerical values range between -1.00 dioptres (D) and -6.00 dioptres (D) for Myopia. If the user has myopic astigmatism, two additional sets of numerical inputs are required: a first numerical input for the cylinder lens power required to correct the difference between the powers of the two principal meridians of each eye, and a second numerical input for the axis of astigmatism. For example, an eyeglass prescription for correction of myopic astigmatism could be: -3.50 D -1.00 D x 90. The first number, -3.50, is the sphere power in dioptres for correction of myopia in the flatter (less nearsighted) principal meridian of the respective eye. The second number, -1.00, is the cylinder power for the additional myopia correction required for the more curved principal meridian. In this case, the total correction required for this meridian is -4.50 D, which is calculated by: (-3.50) + (-1.00) = -4.50 D. The third number, 90, is called the axis of astigmatism. This is the location (in degrees) of the flatter principal meridian, on a 180-degree rotary scale where 90 degrees designates the vertical meridian of the respective eye, and 180 degrees designates the horizontal meridian of the respective eye.
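The prescription arithmetic above can be sketched in a few lines; the function name is illustrative and not part of the application:

```python
def total_meridian_power(sphere: float, cylinder: float) -> float:
    """Total correction (in dioptres) required along the more curved
    principal meridian: sphere power plus cylinder power."""
    return sphere + cylinder

# Worked example from the prescription "-3.50 D -1.00 D x 90":
sphere = -3.50    # power for the flatter principal meridian
cylinder = -1.00  # additional power for the more curved meridian
axis = 90         # axis of astigmatism in degrees (vertical meridian)

print(total_meridian_power(sphere, cylinder))  # -4.5
```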

In the present example, an alternative method involving eye image capture is available upon user request to obtain the eye refractive index of each eye of the user, and this is done via a step 610. That is, at step 610, instead of manually inputting the values of the respective eye refractive indices, the user may select that one or more eye images are to be captured using a camera connected to the input interface (514 of Fig. 5) of the apparatus (500 in Fig. 5). The one or more eye images may include a close-up eye retina snapshot. The eye retina snapshot is then processed using a suitable algorithm to determine the eye refractive index values for myopia and/or astigmatism. In another example, three photographs may be taken using the camera for each eye in succession from three frontal angles. That is, the three photographs are taken from the left side of the user's head facing the left eye, the front side facing the eyes of the user and the right side of the user's head facing the right eye. The three photographs are then processed via a suitable algorithm to determine the eye refractive index values for myopia and/or astigmatism.

In a step 612, the user selects to correct one or more images for myopia, astigmatism, hyperopia and/or presbyopia. At step 612, using the information on eye refractive index from the manual input of the left eye refractive index and the right eye refractive index, or using the information on eye refractive index obtained through eye image capture in step 610, an eye wavefront analyzer (e.g. 524 in Fig. 5) computes an average eye refractive index for both eyes in the present example if myopia and/or astigmatism is applicable to the user. If myopia and/or astigmatism is not applicable to the user, step 612 does not have to be carried out. The average eye refractive index is then used in further processing to obtain an average point spread function (PSF). It is appreciated that this averaging to obtain the average eye refractive index may be performed when it is detected that the eye refractive indices of the two eyes are different. However, this averaging may also be performed as a default step even when both eyes have the same or similar eye refractive index. Another alternative is that the averaging is not performed when it is detected that the eye refractive indices of both eyes are the same, and the common eye refractive index is used in further processing to obtain a point spread function.
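The branching described above (average when the two refractive indices differ, or reuse the common value when they match) might be sketched as follows; the function name and the optional force_average flag are our own illustrative choices:

```python
def effective_refractive_index(left: float, right: float,
                               force_average: bool = False) -> float:
    """Return the refractive index used for PSF derivation: the common
    value when both eyes match, otherwise the average of the two eyes.
    force_average models the 'average as a default step' alternative."""
    if left == right and not force_average:
        return left
    return (left + right) / 2.0

print(effective_refractive_index(-3.0, -3.0))  # -3.0 (common value reused)
print(effective_refractive_index(-3.0, -4.0))  # -3.5 (average of the two)
```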

Wavefront of light entering an eye can be visualized as a surface over which light has a constant phase. An ideal spherical wavefront will make light coming into the eye converge to a single point in the retina of the eye so that a clear and sharp image can be seen. In operation, the eye wavefront analyzer derives a wavefront aberration function from the average eye refractive index or eye refractive index (when the eye refractive index of both eyes are the same). The wavefront aberration function is a function representing optical deviations of a wavefront from an ideal spherical wavefront. Any wavefront aberration will cause deterioration in quality of point images in an eye, and also in quality of an actual image seen by the eye as a whole.

The wavefront aberration function can be represented by equation (1) as follows:

W(ρ, θ) = Σ_i c_i Z_i(ρ, θ) --- (1)

In equation (1), c_i represents the Zernike coefficients and Z_i(ρ, θ) represents the corresponding Zernike polynomials. Zernike coefficients and Zernike polynomials are well suited to representing the wavefront aberration function.
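As an illustration of equation (1), the sketch below evaluates a wavefront aberration built from just two low-order Zernike terms, defocus Z(2,0) and astigmatism Z(2,2) with their usual normalization constants; a real eye wavefront analyzer would fit many more terms, and the coefficient values here are arbitrary:

```python
import math

def z_defocus(rho: float, theta: float) -> float:
    # Zernike defocus term Z(2,0) = sqrt(3) * (2*rho^2 - 1)
    return math.sqrt(3.0) * (2.0 * rho ** 2 - 1.0)

def z_astig(rho: float, theta: float) -> float:
    # Zernike astigmatism term Z(2,2) = sqrt(6) * rho^2 * cos(2*theta)
    return math.sqrt(6.0) * rho ** 2 * math.cos(2.0 * theta)

def wavefront_aberration(rho: float, theta: float, coeffs: dict) -> float:
    """W(rho, theta) = sum_i c_i * Z_i(rho, theta), truncated to the two
    terms above (illustrative only)."""
    return (coeffs["defocus"] * z_defocus(rho, theta)
            + coeffs["astig"] * z_astig(rho, theta))

coeffs = {"defocus": 0.5, "astig": -0.2}  # arbitrary example coefficients
print(wavefront_aberration(1.0, 0.0, coeffs))  # ≈ 0.3761
```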

Human visual perception is created by projection of images of external objects on the eye retina, which is a light sensitive portion of a human eye. Any object, when viewed by the eye, can be thought of as a two-dimensional array of point sources with variable intensity. In the context of human-computer or mobile digital display screen interaction, when a user views a display screen, each pixel of an image displayed on the display screen can be treated as a point source, and a corresponding image of that pixel is projected upon the eye retina of the user. An ideal imaging system would establish an exact point-to-point mapping between an external object and a retinal image. However, all imaging systems involve some degree of aberration. Therefore, every point source is distributed to an extended area on the eye retina of the user with variable intensity (intensity distribution). This intensity distribution can be represented as the point spread function (PSF), which is analogous to a two-dimensional impulse response function of an imaging system. The PSF can be used to describe the imaging quality of an optical system. It can be derived from the wavefront aberration function. Once the PSF is obtained, the amount of adjustment to be made to an image to correct it for myopia and/or astigmatism can be worked out.

The wavefront aberration function is obtained from the eye wavefront analyzer as a set of Zernike coefficients. The PSF can be determined from the wavefront aberration function. An optical transfer function (OTF) can be determined by applying Fourier Transform to the PSF. The OTF can be used to determine an uncompensated image function. Inverse Fourier Transform can be performed on the uncompensated image function to obtain a compensated image function, which can be further processed to obtain a compensated image.

Specifically, without image compensation, the apparatus (500 in Fig. 5) will output images for display that will appear distorted or blurred to the eyes of the user with myopia and/or astigmatism. The extent of distortion or blur can be defined by the PSF determined for the user's eyes. For example, an image represented by a function, O(x, y), is degraded by the PSF of the user's eyes, which is represented by a function PSF(x, y). A resultant distorted image, I(x, y), will be viewed on the user's eye retina due to the user's myopia and/or astigmatism. This example can be expressed mathematically as a convolution process in the spatial domain, represented by equation (2) as follows:

I(x, y) = O(x, y) * PSF(x, y) --- (2)

The OTF can be determined by applying Fourier Transform to the PSF, as represented by equation (3) as follows:

OTF(fx, fy) = F{PSF(x, y)} --- (3)

Therefore, mathematically, an uncompensated image function, RD(fx, fy), can be represented in the frequency domain by equation (4) as follows:

RD(fx, fy) = I(fx, fy) / OTF(fx, fy) --- (4)

Accordingly, in order to counteract or remove the distortion introduced by the PSF, an actual deconvoluted image or, in other words, the compensated image can be represented by a compensated image function, RD(x, y), obtained through inverse Fourier Transform as shown in equation (5) below:

RD(x, y) = F^-1{RD(fx, fy)} --- (5)

An objective of the above deconvolution is to develop a compensated image corrected for myopia and/or astigmatism to be displayed on a device display screen. The compensated display image should be such that, when naturally convolved with the PSF of the user's eyes, it will yield a retinal image that is as close as possible to the clear image to be viewed. It is noteworthy that the above method is used to develop a compensated image corrected for myopia and/or astigmatism.
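The objective above can be sketched numerically with NumPy: pre-divide the clear image's spectrum by the OTF so that subsequent convolution with the PSF (the eye's blur) returns approximately the clear image. The Gaussian PSF, the image content and the small regularization constant eps (to avoid division by near-zero OTF values) are all our own assumptions, not specified by the application:

```python
import numpy as np

def compensate(image: np.ndarray, psf: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Frequency-domain pre-correction in the spirit of equations (3)-(5):
    divide the image spectrum by the OTF and transform back."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))    # OTF = F{PSF}, equation (3)
    rd_freq = np.fft.fft2(image) / (otf + eps)  # division by the OTF
    return np.real(np.fft.ifft2(rd_freq))       # inverse Fourier Transform

def blur(image: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Model of the eye's distortion, equation (2): circular convolution
    of the displayed image with the PSF, done via the FFT."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * otf))

n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))  # synthetic Gaussian PSF
psf /= psf.sum()

image = np.exp(-(x ** 2 + y ** 2) / (2 * 6.0 ** 2))  # smooth clear target

compensated = compensate(image, psf)  # what the display would show
retinal = blur(compensated, psf)      # what the impaired eye perceives
print(np.abs(retinal - image).max())  # small residual error
```

Displaying the pre-corrected image and letting the eye's PSF blur it reproduces the clear target closely here; with sharp-edged content or a stronger PSF, the division amplifies high frequencies, which is why practical implementations regularize more carefully.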

The deconvolution method described above advantageously improves graphical user interface interactions for computer users with visual impairments. The method assumes that the wavefront aberration of the user's eye is considered at a constant pupil size. If the user's pupil size at the time of viewing images is the same as the pupil size during measurement by the eye wavefront analyzer, the assumption is justified. However, under ordinary circumstances, the pupil size of a user may change due to variations in ambient lighting conditions or even due to emotional factors (e.g. fatigue). Hence, when Zernike coefficients are determined from the obtained information of the eye refractive index of the user, a pupil diameter defining the circular area over which the Zernike functions are defined must be specified. Different pupil sizes therefore yield different PSFs. In order to avoid a possible pupil size mismatch between the PSF derived for a user and the pupil size at the time of viewing a compensated image, a matrix method called Campbell's method can be used to adjust the Zernike coefficients of the wavefront aberration function to take into consideration the viewing pupil size.

Campbell's method states that the same area of a surface will be described by different sets of Zernike coefficients if a different aperture radius is used to find the Zernike coefficients. With the information on the eye refractive index of the user represented as a set of Zernike coefficients related to a given aperture radius, Campbell's method provides a conversion matrix [C] that properly converts a vector of Zernike coefficients |c> corresponding to the original aperture radius into a vector of Zernike coefficients |c'> corresponding to a new aperture radius. The conversion matrix [C] can be represented by equation (6) as follows:

|c'> = [C] |c> --- (6)

The conversion matrix [C] can also be derived as equation (7) as follows:

[C] = [P]^T [N]^-1 [R]^-1 [η] [R] [N] [P] --- (7)

In equation (7), the "T" and "-1" superscripts denote matrix transposition and inversion, respectively. Furthermore, [P] represents the permutation matrix, [N] the normalization matrix, [R] the weighting coefficient matrix and [η] the powers-of-ratio matrix. Among these matrices, only [η] is related to the new aperture radius.
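Equations (6) and (7) compose as a straightforward matrix product. The sketch below only assembles the conversion matrix from already-constructed [P], [N], [R] and [η] matrices; constructing those matrices themselves follows Campbell's published method and is not shown, and the function names are hypothetical.

```python
import numpy as np

def conversion_matrix(P, N, R, eta):
    """Equation (7): [C] = [P]^T [N]^-1 [R]^-1 [eta] [R] [N] [P].
    eta is the only input that depends on the new aperture radius.
    """
    return P.T @ np.linalg.inv(N) @ np.linalg.inv(R) @ eta @ R @ N @ P

def rescale_coefficients(c, C):
    """Equation (6): |c'> = [C] |c> -- convert a Zernike coefficient
    vector from the original aperture radius to the new one."""
    return C @ c
```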

The compensated image function, RD(x,y), represented by equation (5), can be further processed for displaying a compensated image on a display connectable to an output interface (e.g. 516 in Fig. 5) of an apparatus (e.g. 500 in Fig. 5) of an example of the present disclosure. Further processing includes making display adjustments to generate the compensated image. The display adjustments to be made can be determined through, for instance, trial and error, during development of one or more applications for the correction of one or more images for myopia and/or astigmatism. A relationship between the compensated image function and the amount and type of display adjustment to make for the compensated image can be determined during development of the one or more applications. Once such a relationship is established, the one or more applications can be programmed accordingly to provide the predetermined amount and type of display adjustment to make for each compensated image function generated. The display adjustments for the compensated image may include steps similar to those taken by an autorefractor or automated refractor used during an eye examination to provide a corrected image for objective measurement of the user's eye refractive index for prescription of spectacles or contact lenses.

After the eye wavefront analyzer obtains the PSF for a user in step 612, the computational imaging component carries out the steps as described above to obtain the compensated image function, RD(x,y), represented by equation (5) in a step 614. In step 614, the computational imaging component manipulates any uncompensated image inputted for processing based on the compensated image function obtained for myopia and/or astigmatism correction if myopia and/or astigmatism is applicable to the user. Image manipulation for myopia and/or astigmatism includes adjustment of contrast of grey tone or light colored image content, adjustment of contrast of background colors, brightness adjustment, and the like, and any combination thereof. The computational imaging component also manipulates any uncompensated image inputted for processing for hyperopia and/or presbyopia correction if hyperopia and/or presbyopia is applicable to the user. Image manipulation for hyperopia and/or presbyopia includes enhancing certain alphanumeric text by automatic text enlargement, magnifying an image or specific content of an image, and the like.
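The contrast and brightness manipulations named in step 614 can be illustrated as follows. The actual amounts, and their mapping from the compensated image function, are determined during application development, so the parameters and the function name here are placeholders.

```python
import numpy as np

def adjust_image(image, contrast=1.0, brightness=0.0):
    """Illustrative contrast/brightness adjustment of the kind step 614
    describes. `image` holds grey levels in [0, 1]; contrast scales
    values about the image mean, brightness shifts them, and the result
    is clipped back into the displayable range.
    """
    mean = image.mean()
    out = (image - mean) * contrast + mean + brightness
    return np.clip(out, 0.0, 1.0)
```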

After image manipulation in step 614, the compensated image or images corrected for myopia, astigmatism, hyperopia and/or presbyopia, whichever applicable to the user, are displayed in a step 616. In the present example, the one or more applications are configurable to adjust system settings such that all images to be displayed on a display connected to the output interface (516 of Fig. 5) are corrected for myopia, astigmatism, hyperopia and/or presbyopia. If all images to be displayed are selected to be corrected, all images to be displayed on the display would be input to the one or more applications for correction once the user selects to correct one or more images for myopia, astigmatism, hyperopia and/or presbyopia at step 612. Compensated images will then be displayed in place of all the images. It is of course possible that the user only selects specific images and not all images to be corrected. For instance, a user may select that only all images containing alphanumeric characters or text are corrected and all purely picture images need not be corrected. A user may also select a specific image to be corrected, or a video or part of a video, which is essentially made up of a plurality of images. If all images are selected for correction, the graphical user interface of the apparatus (500 in Fig. 5) would be adjusted accordingly to display the corresponding compensated images that have been corrected for myopia, astigmatism, hyperopia and/or presbyopia, whichever applicable to the user. In the case where all images to be displayed are corrected, if the adjusted graphical user interface is a clear image to the user, the user can upon confirmation save the settings for the correction of images for the user's visual impairment into the system memory (504 in Fig. 5) in a step 618.
If any compensated image appears unclear, the one or more applications are configured to enable the user to make personal changes to the unclear images according to the user's preference prior to saving the settings into the system memory (504 in Fig. 5). The user may alter the clarity of the images displayed by making increment or decrement interval changes of a predetermined degree, for instance, increments or decrements of +/-0.10 D in dioptres and/or +/-1 degree in the axis of astigmatism. Upon completion of the personal changes made by the user, the user may select to save the personalized settings into the system memory (504 in Fig. 5) in the step 618.

In another example, instead of using the eye wavefront analyzer (524 in Fig. 5) and the computational imaging component (526 in Fig. 5) to generate compensated images after the apparatus (500 in Fig. 5) obtains the user's eye refractive index, uncompensated images inputted for vision correction are corrected based on predetermined settings established through trial and error. Such predetermined settings can, for instance, be obtained during development of the one or more applications by engaging the help of a sample population consisting of users with a wide spectrum of different degrees of myopia, astigmatism, hyperopia and/or presbyopia. The users of the sample population can be asked to adjust display settings until a clear image is seen by them without visual aid. A relationship can then be established between the users' eye conditions and how the display settings are adjusted by them. From the data collected for the sample population, it should be possible to estimate what display settings are preferred by any user having any condition of myopia, astigmatism, hyperopia and/or presbyopia.

FIG. 7 is a flowchart 700 illustrating another image correction process to generate one or more images corrected for myopia, astigmatism, hyperopia and/or presbyopia for displaying on a display connected to the output interface 516 of the apparatus 500 in Fig. 5. This image correction process is executed by one or more applications stored in the system memory (504 in Fig. 5) of the apparatus 500. In the present example, it is assumed that a user has selected through the one or more applications to have all images to be displayed on the display corrected for myopia, astigmatism, hyperopia and/or presbyopia, whichever applicable to the user.

In a step 702, a display user interface (also known as graphical user interface) shown on the display is activated by the user for entering of eye information of the user, which is indicative of degree of visual impairment of the user. In the present example, the user is assumed to have astigmatism, and hyperopia and/or presbyopia. The display user interface prompts the user to input eye information.

In a step 704, the user enters a magnification multiplier to the display user interface, which is essentially the same as an eyeglass prescription for correction of hyperopic astigmatism. An example could look like this: +2.00 D -1.00 D x 180, where +2.00 D is determined as a 2x magnification multiplier for hyperopia and/or presbyopia. The unit "D" is in dioptres. Hyperopia and/or age-related presbyopia typically ranges from +1.00 D or less to +3.00 D. Although there are cases where the hyperopia and/or age-related presbyopia has exceeded +5.00 D, these are abnormalities rather than the norm. For the purpose of this example, the range of dioptre readings is kept to within +3.00 D, but other examples are not limited to this.
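The entry in step 704 can be sketched as below. The prescription format and the one-to-one mapping from the sphere reading to a magnification multiplier (+2.00 D treated as 2x, capped at +3.00 D) are assumptions drawn from the example in the text, and `parse_prescription` and `magnification_multiplier` are hypothetical helper names.

```python
import re

def parse_prescription(rx):
    """Parse an eyeglass prescription such as '+2.00 D -1.00 D x 180'
    into (sphere, cylinder, axis). Assumes the format shown above.
    """
    m = re.match(r'([+-]\d+\.\d+)\s*D\s+([+-]\d+\.\d+)\s*D\s*x\s*(\d+)', rx)
    if not m:
        raise ValueError('unrecognised prescription: %r' % rx)
    return float(m.group(1)), float(m.group(2)), int(m.group(3))

def magnification_multiplier(sphere, limit=3.0):
    """Map the hyperopia/presbyopia sphere reading (in dioptres) to a
    magnification multiplier, kept within the +3.00 D range the text
    suggests (an assumption here).
    """
    return min(max(sphere, 0.0), limit)
```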

In the present example, the user has astigmatism as well and is assumed to have already entered and processed his or her eye information for correction of images for myopia and/or astigmatism through the relevant steps illustrated in Fig. 6. Therefore, after the entry of the eye information by the user in the step 704, the one or more applications access the system memory (504 in Fig. 5) to retrieve earlier settings for correction of images for myopia and/or astigmatism in a step 706. The earlier settings include at least the PSF derived for the eyes of the user. Of course, there will not be earlier settings to be retrieved if the user does not have myopia and/or astigmatism.

Step 708 in the flowchart 700 is a step where the eye wavefront analyzer (524 of Fig. 5) of the apparatus (500 of Fig. 5) determines a wavefront aberration function from previously obtained eye information of the user to determine a point spread function (PSF), which can be an average PSF, for the eyes of the user. As described with reference to Fig. 6, the PSF is used in the generation of one or more images corrected for myopia and/or astigmatism. Step 708 may be carried out after retrieving the earlier settings if the earlier settings include only the eye refractive index of the eyes of the user for myopia and/or astigmatism. Step 708 need not be carried out in the flowchart 700 of Fig. 7 if the PSF has previously been determined and stored in, for instance, the system memory (504 in Fig. 5). However, if the PSF is not previously stored in the system memory (504 in Fig. 5), step 708 can be carried out to determine the PSF.

In a step 710, a first image to be displayed after the entry of the eye information in step 704 is corrected by the computational imaging component (526 in Fig. 5) for myopia and/or astigmatism based on the retrieved earlier settings and, in this case, according to the image compensation technique described with reference to Fig. 6. The first image in this case is an image of the display user interface. A corrected first image is generated after correction for myopia and/or astigmatism. The generated corrected first image is further processed by the computational imaging component using the magnification multiplier entered by the user in step 704 to enlarge the corrected first image. The magnification helps to compensate the corrected first image for hyperopia and/or presbyopia. As the user has selected that all images to be displayed on the display are corrected for myopia, astigmatism, hyperopia and/or presbyopia, all images to be displayed starting from the first image will go through the same correction as the first image for myopia and/or astigmatism and the same magnification as the first image for hyperopia and/or presbyopia. A resultant effect is that there would be "zooming" of all images corrected for myopia and/or astigmatism displayed on the display as correction for hyperopia and/or presbyopia. All fully compensated images for myopia and/or astigmatism, and hyperopia and/or presbyopia are then displayed accordingly on the display in a step 712. In the present example, subsequent fully compensated images are images of the display user interface until the user switches to display images other than those of the display user interface. These other images are also subject to the same correction and magnification as the first image before being displayed.
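The "zooming" of step 710 can be sketched as integer-factor pixel replication. A production implementation would interpolate fractional factors and crop the result to the display; this minimal sketch, with a hypothetical function name, only shows the enlargement itself.

```python
import numpy as np

def magnify(image, multiplier):
    """Pixel-replication magnification sketch for step 710. Only integer
    factors are handled; fractional multipliers are rounded.
    """
    k = max(1, int(round(multiplier)))
    return np.repeat(np.repeat(image, k, axis=0), k, axis=1)
```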

In another example, the magnification of images or "zooming" effect described above can be activated by voice command, without need for user activation via hands-on interaction with the apparatus (500 in Fig. 5), in a step 714. The voice command will be captured by a microphone connected to the input interface (e.g. 514 of Fig. 5) of the apparatus (500 in Fig. 5). In this case, it is optional to have the step 704 for the user to enter the magnification multiplier. Even if the magnification multiplier is entered and all fully compensated images are displayed with magnification, the voice command function can also be used to change the magnification. For example, if certain alphanumeric text or images are too small for distant reading after magnification or without magnification, the user can control the apparatus (500 in Fig. 5) via voice command to instruct magnification by a predetermined number of times without hands-on interaction with the apparatus (500 in Fig. 5). For instance, when the one or more applications are running, a user may say "zoom in two times" to magnify displaying of the image by 2 times or "zoom out two times" to reduce displaying of the image by 2 times.
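Once the microphone audio has been converted to text by a speech recogniser (assumed upstream and not shown), commands such as "zoom in two times" could be interpreted as in this sketch; the function name and the small vocabulary of number words are illustrative assumptions.

```python
import re

def parse_zoom_command(utterance):
    """Interpret recognised speech such as 'zoom in two times' or
    'zoom out two times' (step 714). Returns a zoom factor (>1 for in,
    <1 for out) or None if the utterance is not a zoom command.
    """
    words = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5}
    m = re.match(r'zoom (in|out) (\w+) times?', utterance.strip().lower())
    if not m:
        return None
    factor = words.get(m.group(2))
    if factor is None:
        return None
    return factor if m.group(1) == 'in' else 1.0 / factor
```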

In a step 716, the settings for magnification to correct hyperopia and/or presbyopia obtained from steps 704 and 710 can be saved in the system memory (504 in Fig. 5) for future use.

FIG. 8 is a flowchart 800 illustrating an example of a computational imaging process to enable one or more images corrected for myopia, astigmatism, hyperopia and/or presbyopia to be displayed on a display connected to the output interface 516 of the apparatus 500 in Fig. 5 when an operating system of the apparatus (500 in Fig. 5) is subject to a system reboot. The computational imaging process is executed by one or more applications stored in the system memory (504 in Fig. 5) of the apparatus 500. A user is assumed to have already obtained the correction settings required for correction of images for myopia and/or astigmatism from the steps of Fig. 6 and the magnification settings required for correction of images for hyperopia and/or presbyopia from the steps of Fig. 7. The correction settings and magnification settings are stored in the system memory (504 in Fig. 5). While it has been disclosed that upon user selection, all images can be automatically corrected for myopia, astigmatism, hyperopia and/or presbyopia at the end of the steps in Fig. 6 and Fig. 7, a system reboot may be required before all images to be displayed on the display can be corrected for myopia, astigmatism, hyperopia and/or presbyopia. The steps that follow also set out what could happen after a system reboot. After system reboot at a step 802, the system memory (504 in Fig. 5) is accessed to retrieve in a step 804 a compensated image of a home screen of the apparatus (500 in Fig. 5) that is cached in the system memory (504 in Fig. 5). The compensated image of the home screen of the apparatus (500 in Fig. 5) is displayed on the display. The compensated image of the home screen may be generated previously upon the user confirming the correction settings and magnification settings when these settings were generated.
The one or more applications allow the user to determine which other applications running on the apparatus require all images of graphical user interfaces of these other applications to be corrected for myopia, astigmatism, hyperopia and/or presbyopia prior to display. The one or more applications also allow the user to select these other applications to be recorded in the system memory (504 in Fig. 5) as requiring all images of the graphical user interfaces of these other applications to be corrected for myopia, astigmatism, hyperopia and/or presbyopia.

Step 806 in the flowchart 800 is a step where the eye wavefront analyzer (524 of Fig. 5) of the apparatus (500 of Fig. 5) determines a wavefront aberration function from previously obtained eye information of the user to determine a point spread function (PSF), which can be an average PSF, for the eyes of the user. As described with reference to Fig. 6, the PSF is used in the generation of one or more images corrected for myopia and/or astigmatism. Step 806 may be carried out after retrieving the earlier settings if the earlier settings include only the eye refractive index of the eyes of the user for myopia and/or astigmatism. Step 806 need not be carried out in the flowchart 800 of Fig. 8 if the PSF has previously been determined and stored in, for instance, the system memory (504 in Fig. 5). However, if the PSF is not previously stored in the system memory (504 in Fig. 5), step 806 can be carried out to determine the PSF.

In a step 808, the computational imaging component further processes the obtained PSF to get compensated images according to the relevant steps described with reference to Fig. 6. The compensated images are images to be displayed individually on the display after the compensated image of the home screen is displayed. Specifically, the computational imaging component corrects for myopia and/or astigmatism any images to be displayed that are not cached like the home screen of the apparatus (500 in Fig. 5).

In a step 810, compensated images displayed are adjusted by the user upon request according to preference of the user. The adjustments can be made as described previously by making increment or decrement interval changes of predetermined degree with respect to axis of astigmatism and/or in dioptres.

Steps 806, 808 and 810, known collectively as step 812, address correction of myopia and/or astigmatism.

If hyperopia and/or presbyopia are applicable to the user, in a step 814, image magnification is performed on each of the images corrected for myopia and/or astigmatism based on a magnification multiplier previously entered by the user before the final compensated image corrected for myopia and/or astigmatism and magnified for hyperopia and/or presbyopia is displayed.

Steps 806, 808, 810 and 814 known collectively as step 816 address correction of myopia, astigmatism, hyperopia and/or presbyopia and any combination thereof that is applicable to the user. The final compensated images are displayed as graphical user interface on the display in a step 818. The user viewing the compensated graphical user interface does not need visual aid as the compensated images of the graphical user interface would appear clear to the user on the retina of the user's eyes. It is appreciated that the one or more images of the graphical user interface to be corrected prior to display can include any image of an application operated on the apparatus (500 in Fig. 5), for instance, an Internet browser window containing contents of a webpage, any image in a video, any image of a game being played, any image showing one or more icons of applications downloaded and installed on the apparatus (500 in Fig. 5) and the like.

FIG. 9 is a flow chart 900 illustrating a visual aid detecting process that can be implemented on the apparatus 500 in Fig. 5. Similarly, one or more images corrected for myopia, astigmatism, hyperopia and/or presbyopia can be displayed on a display connected to the output interface 516 of the apparatus 500 in Fig. 5. This visual aid detecting process is executed by one or more applications stored in the system memory (504 in Fig. 5) of the apparatus 500.

In a step 902, while a display graphical user interface is displayed on a display connected to the output interface (516 in Fig. 5) of the apparatus 500 in Fig. 5, the one or more applications automatically activate a built-in retina sensor or camera connected to the input interface (514 in Fig. 5) of the apparatus 500 in Fig. 5 to continuously or periodically determine whether the user is wearing visual aid such as spectacles or contact lenses. The display graphical user interface may or may not be displayed by images corrected for myopia, astigmatism, hyperopia and/or presbyopia.

The built-in retina sensor or camera captures one or more images of the user's face at a step 904 and image processing is performed on the one or more images to determine whether the user is wearing a visual aid. For example, a user wearing spectacles would have spectacle "rings" surrounding each eye or a nose bridge between the eyes. Image processing can be performed to detect such rings and/or the nose bridge. A user wearing contact lenses will have a ring appearing in the sclera region (i.e. the white region of the eye). Image processing can be performed to detect the ring resulting from wearing contact lenses.
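The ring detection of step 904 could, for instance, score how much darker an annulus around the eye centre is than its surroundings. This toy heuristic, with hypothetical names, radii and threshold, stands in for a proper circle detector (e.g. a Hough circle transform) that a production system would use.

```python
import numpy as np

def ring_score(eye_region, inner_r, outer_r):
    """Score how much darker an annulus (where a spectacle rim or
    contact-lens edge would sit) is than the rest of the eye region.
    `eye_region` is a 2-D grey-level array with values in [0, 1].
    """
    h, w = eye_region.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    annulus = (r >= inner_r) & (r < outer_r)
    # darker annulus relative to the surroundings -> higher score
    return float(eye_region[~annulus].mean() - eye_region[annulus].mean())

def wearing_visual_aid(eye_region, inner_r, outer_r, threshold=0.2):
    """Decide, per the step 906/914 branch, whether a ring is present."""
    return ring_score(eye_region, inner_r, outer_r) > threshold
```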

If it is detected that the user is wearing a visual aid like spectacles or contact lenses in a step 906, the apparatus is configured to retrieve from, for instance, the system memory (504 in Fig. 5) default factory settings for images displayed on the display in a step 908. Essentially, the default factory settings used for the display refer to a standard display format of images without any correction for myopia, astigmatism, hyperopia and/or presbyopia. Hence, if the display graphical user interface is already displaying images corrected for myopia, astigmatism, hyperopia and/or presbyopia, the default factory settings are applied and correction of the images for displaying is stopped in a step 912. If the display graphical user interface is not already displaying images corrected for myopia, astigmatism, hyperopia and/or presbyopia and is already displaying according to the default factory settings, nothing needs to be done at the step 912.

If it is detected that the user is not wearing a visual aid like spectacles or contact lenses in a step 914, the apparatus is configured to retrieve from, for instance, the system memory (504 in Fig. 5) display settings to correct images to be displayed for myopia, astigmatism, hyperopia and/or presbyopia, whichever applicable to the user, in a step 916. Settings to correct images for myopia and/or astigmatism are retrieved at a step 918 and settings to correct images for hyperopia and/or presbyopia are retrieved at a step 920, whichever applicable to the user. If the display graphical user interface is not already displaying images corrected for myopia, astigmatism, hyperopia and/or presbyopia, computational imaging processing for myopia and/or astigmatism is carried out in a step 922 if the user has myopia and/or astigmatism, and image magnification processing for hyperopia and/or presbyopia is carried out in a step 924 if the user has hyperopia and/or presbyopia, so that the display graphical user interface is displayed with images corrected for myopia, astigmatism, hyperopia and/or presbyopia, whichever applicable to the user. The methods for correcting images for myopia, astigmatism, hyperopia and/or presbyopia in steps 922 and 924 are similar to what has been taught by the description with reference to Fig. 6 and Fig. 7. If the display graphical user interface is already displaying images corrected for myopia, astigmatism, hyperopia and/or presbyopia, computational imaging processing for myopia and/or astigmatism at step 922 and image magnification processing for hyperopia and/or presbyopia at step 924 do not have to be carried out. Depending on what images the display graphical user interface is already displaying and whether the user is wearing a visual aid, the display graphical user interface is adjusted accordingly in a step 926.

The advantages of the examples of the present disclosure may include, without limitation, a sharper and clearer display of the user interface on mobile devices including mobile phones, tablets, notebooks, desktop computers and any consumer electronics, for instance, Virtual Reality goggles/eyewear, which require a display for human interface. This essentially removes the need for visual aids, such as spectacles or contact lenses, thus advantageously reducing reliance on these visual aids. In addition, the advantages include preventing distractions/interruptions due to visual impairment caused by presbyopia, hyperopia, myopia and/or astigmatism, such as a need to put on or adjust spectacles while engaging in activities such as driving an automobile or operating a machine, by allowing for easy reading of content on a display of a mobile device while engaging in the activities.

Examples of the present disclosure may have the following features.

An apparatus (e.g. 500 of Fig. 5) for display adjustment, the apparatus comprising: a display output (e.g. 516 of Fig. 5); an input interface (e.g. 514 of Fig. 5); a processor (e.g. 502 of Fig. 5); and a memory (e.g. 504 of Fig. 5) for storing one or more applications (e.g. 524, 526 of Fig. 5), the processor being configured to execute the one or more applications to control the apparatus for obtaining eye information of a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia through the input interface; generating a compensated image corrected for myopia, astigmatism, hyperopia, and/or presbyopia, whichever applicable to the user, based on the obtained eye information; and outputting to the display output the compensated image for viewing by the user without visual aid on a display connected to the display output, wherein the one or more applications is configurable to operate in background to control the apparatus such that a plurality of selected images for outputting to the display output are generated according to the generation of the compensated image. The apparatus may be a semiconductor device or chipset residing in an electronic device.

The eye information may be obtained by the user inputting the eye information to a user input device connected to the input interface.

The apparatus may be controlled to operate as an eye wavefront analyzer (e.g. 524 of Fig. 5) for determining a wavefront aberration function for each eye of the user to obtain a point spread function for each eye based on the obtained eye information, wherein the eye information includes the eye refractive index of each eye of the user.

When the eye refractive indices of the eyes of the user are different, or regardless of whether they are different, the apparatus may be controlled for carrying out steps as follows:

- averaging values of the obtained information of the respective eye refractive indices;
- determining an uncompensated image function from data of an uncompensated image;
- determining an average point spread function for the eyes of the user based on the averaged values and the point spread function for each eye;
- applying Fourier Transform on the average point spread function determined to obtain an optical transfer function;
- determining a distorted image function based on the uncompensated image function and the optical transfer function;
- applying inverse Fourier Transform on the distorted image function to obtain a compensated image function for correction of myopia and/or astigmatism; and
- generating the compensated image corrected for myopia and/or astigmatism based on the compensated image function.
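The averaging branch above can be sketched as follows. It assumes the two per-eye PSFs are sampled on the same grid, and the function name is a hypothetical helper.

```python
import numpy as np

def average_otf(psf_left, psf_right):
    """Average the two eyes' point spread functions and Fourier-transform
    the result to obtain the optical transfer function used in the
    deconvolution steps above.
    """
    avg_psf = 0.5 * (psf_left + psf_right)          # average point spread function
    return np.fft.fft2(np.fft.ifftshift(avg_psf))   # OTF = F{average PSF}
```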

When the eye refractive indices of the eyes of the user are the same, the apparatus may be controlled for carrying out steps as follows:

- determining an uncompensated image function from data of an uncompensated image;
- applying Fourier Transform on the point spread function common to both eyes of the user to obtain an optical transfer function;
- determining a distorted image function based on the uncompensated image function and the optical transfer function;
- applying inverse Fourier Transform on the distorted image function to obtain a compensated image function for correction of myopia and/or astigmatism; and
- generating the compensated image corrected for myopia and/or astigmatism based on the compensated image function.

Campbell's method may be used to adjust Zernike coefficients of the wavefront aberration function to take into consideration viewing pupil size of an eye.

When hyperopia or presbyopia is applicable to the user, generating the compensated image may include magnifying content in the compensated image based on a multiplier derived from the obtained eye information.

The plurality of selected images may be images in a video file.

The apparatus may be connectable to a microphone and the one or more applications may be configurable to control the apparatus to select the plurality of selected images by voice command through the microphone.

The apparatus may be connectable to a camera and the one or more applications may be configurable to control the apparatus to detect use of visual aid on the user through capturing and analyzing an image of the user and stop the generation of the plurality of selected images upon detection of the use of visual aid.

The apparatus may be connectable to a camera and the one or more applications may be configurable to control the apparatus to obtain the eye refractive index of the user as the eye information by the user capturing images of each eye of the user from directions including left side of the user's head facing the eyes of the user, front side of the user's head facing the eyes of the user and right side of the user's head facing the eyes of the user.

The one or more applications may be configured to control the apparatus to obtain input through the input interface to adjust clarity of the compensated image upon request by the user.

The input obtained through the input interface may include increment or decrement interval of a predetermined degree with respect to axis of astigmatism and/or in dioptres.

Generating the compensated image may include a step of adjusting contrast of grey tone, a step of adjusting contrast of background color, or a step of adjusting brightness, or any combination of said steps.

The plurality of selected images may include all images to be outputted to the display output. In the case that not all images are outputted, images selected by a user may be outputted; for instance, a user may select that only images with text in them are to be corrected for user viewing and outputted for displaying, while images containing photographs, animations, and drawings are excluded.

A mobile device for display adjustment may be provided. The mobile device may comprise: the display; the apparatus having one or more of the features indicated above; and a user input device connectable to the input interface.

A method for display adjustment may be provided. The method may comprise:

- obtaining eye information of a user with visual impairment including myopia, astigmatism, hyperopia, and/or presbyopia;
- generating a compensated image corrected for myopia, astigmatism, hyperopia, and/or presbyopia, whichever applicable to the user, based on the obtained eye information;
- displaying the compensated image for viewing by the user without visual aid on a display; and
- operating in background one or more applications to control an apparatus such that a plurality of selected images for displaying on the display are generated according to the generation of the compensated image.

While the foregoing written description enables a person skilled in the art to make and use what is considered presently to be the best mode thereof, the person skilled in the art will understand and appreciate the existence of variations, combinations, and equivalents of the specific embodiment, method, and examples herein. The invention should therefore not be limited by the above described embodiment, method, and examples, but by all embodiments and methods within the scope of the patent claims defining the invention.




 