

Title:
MYOPIA-SAFE VIDEO DISPLAYS
Document Type and Number:
WIPO Patent Application WO/2021/087398
Kind Code:
A1
Abstract:
Video signal creation includes receiving an input video signal having RGB (red, green, blue) components, determining that the blue component is greater than the green component by some threshold value and determining a differential between the blue and green components, and sending an output video signal having RGB components, where at least one of the following is true: (i) the output B component is decreased by a fractional amount relative to the input B component based on the differential; and/or (ii) the output R component is increased by a fractional amount relative to the input R component based on the differential, and the output G component is increased by a fractional amount relative to the input G component based on the differential.

Inventors:
OLSEN DAVID WILLIAM (US)
Application Number:
PCT/US2020/058410
Publication Date:
May 06, 2021
Filing Date:
October 30, 2020
Assignee:
VISU INC (US)
International Classes:
H04N9/68; G06T1/00; G06T3/40; G06T5/00; H04N9/31; H04N9/73; H04N9/76
Domestic Patent References:
WO2017051768A1 2017-03-30
WO2012145672A1 2012-10-26
Foreign References:
US9773473B2 2017-09-26
US9046752B2 2015-06-02
Attorney, Agent or Firm:
BOWLEY, Chris C. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for creating video signals comprising:

(a) receiving an input video signal comprising an input red component, an input green component, and an input blue component;

(b) generating an output video signal comprising an output red component, an output green component, and an output blue component based on a magnitude of the input red component, a magnitude of the input green component, and a magnitude of the input blue component; and

(c) sending the output video signal, wherein: a differential between a magnitude of the output blue component and a magnitude of the output green component is reduced compared to a differential between the magnitude of the input blue component and the magnitude of the input green component when the magnitude of the input blue component is greater than the magnitude of the input green component, and the differential between the magnitude of the output blue component and the output green component is unchanged when the magnitude of the input blue component is not greater than the magnitude of the input green component.

2. The method of claim 1, wherein the magnitude of the output green component is increased by a fractional amount relative to the magnitude of the input green component based on the differential between the magnitudes of the input blue and the input green components, and a magnitude of the output red component is increased by a fractional amount relative to a magnitude of the input red component based on the differential between the magnitudes of the input blue and the input green components.

3. The method of claim 2, wherein the magnitude of the output green component and the magnitude of the output red component are increased by the same fractional amount.

4. The method of claim 2, wherein the magnitude of the output green component and the magnitude of the output red component are increased by different, respective, fractional amounts.

5. The method of claim 1, wherein the magnitude of the output blue component is the same as the magnitude of the input blue component.

6. The method of claim 1, wherein the magnitude of the output blue component is decreased by a fractional amount relative to the magnitude of the input blue component based on the differential.

7. The method of claim 1, the method further comprising: before step (c), determining that the differential between the magnitude of the input blue component and the input green component is greater than a predetermined threshold.

8. The method of claim 1, wherein generating the output video signal comprises comparing a magnitude of the input blue component to a magnitude of the input green component.

9. The method of claim 1, wherein sending the output video signal comprises reducing a magnitude of the output blue component relative to the magnitude of the input blue component in order to reduce the differential between the magnitude of the output blue component and the output green component when the magnitude of the input blue component is greater than the magnitude of the input green component.

10. The method of claim 9, wherein the magnitude of the output blue component is reduced by an amount proportional to the differential between the magnitude of the input blue component and the magnitude of the input green component.

11. The method of claim 10, wherein the magnitude of the output blue component is reduced by up to 45% of the differential between the magnitude of the input blue component and the magnitude of the input green component.

12. The method of claim 10, wherein the magnitude of the output blue component is reduced by 46% or more of the differential between the magnitude of the input blue component and the magnitude of the input green component.

13. The method of claim 1, wherein sending the output video signal comprises increasing a magnitude of the output green component relative to the magnitude of the input green component in order to reduce the differential between the magnitude of the output blue component and the output green component when the magnitude of the input blue component is greater than the magnitude of the input green component.

14. The method of claim 13, wherein the magnitude of the output green component is increased by an amount proportional to the differential between the magnitude of the input blue component and the magnitude of the input green component.

15. The method of claim 14, wherein the magnitude of the output green component is increased by up to 45% of the differential between the magnitude of the input blue component and the magnitude of the input green component.

16. The method of claim 14, wherein the magnitude of the output green component is increased by 46% or more of the differential between the magnitude of the input blue component and the magnitude of the input green component.

17. The method of claim 1, wherein sending the output video signal comprises increasing a magnitude of the output red component relative to the magnitude of the input red component when the magnitude of the input blue component is greater than the magnitude of the input green component.

18. The method of claim 1, wherein the magnitudes of the output blue component and the output green component are unchanged relative to the magnitudes of the input blue component and input green component, respectively, when the magnitude of the input blue component is not greater than the magnitude of the input green component.

19. The method of claim 18, wherein a magnitude of the output red component is unchanged relative to a magnitude of the input red component when the magnitude of the input blue component is not greater than the magnitude of the input green component.

20. The method of claim 1, further comprising using a display unit to display images based on the output video signals.

21. The method of claim 20, wherein the display unit comprises a display selected from the group consisting of a cathode ray tube display, a liquid crystal display, an organic light emitting diode display, a plasma display, a digital light processing display, and a virtual reality (VR) headset.

22. The method of claim 1, wherein, when displaying a video image based on the output video signals, the video image has a reduced activity differential between L and M cones in a viewer compared to the video image based on the input video signals.

23. The method of claim 22, wherein the output video signals sufficiently reduce the activity differential between L and M cones in the viewer to reduce myopia-genic properties of a display compared with the input video signals.

24. The method of claim 1, wherein, when displaying a video image based on the output video signals, the video image has a reduced blue saturation for every video image compared to the video image based on the input video signals.

25. The method of claim 1, wherein, the output video signal is generated such that, for every pixel, for any combination of the input red, green, and blue components, the following relationship is true:

Bo-Go < Bi-Gi, wherein:

Bo is the magnitude of the output blue component;

Go is the magnitude of the output green component;

Bi is the magnitude of the input blue component; and

Gi is the magnitude of the input green component.

26. A non-transitory computer-readable medium having program instructions stored thereon that are executable by at least one processor, the program instructions comprising:

(a) instructions for receiving an input video signal comprising an input red component, an input green component, and an input blue component;

(b) instructions for generating an output video signal comprising an output red component, an output green component, and an output blue component based on a magnitude of the input red component, a magnitude of the input green component, and a magnitude of the input blue component; and

(c) instructions for sending the output video signal, wherein: a differential between a magnitude of the output blue component and a magnitude of the output green component is reduced compared to a differential between the magnitude of the input blue component and the magnitude of the input green component when the magnitude of the input blue component is greater than the magnitude of the input green component, and the differential between the magnitude of the output blue component and the output green component is unchanged when the magnitude of the input blue component is not greater than the magnitude of the input green component.

27. The non-transitory computer-readable medium of claim 26, wherein the magnitude of the output green component is increased by a fractional amount relative to the magnitude of the input green component based on the differential between the magnitudes of the input blue and the input green components, and a magnitude of the output red component is increased by a fractional amount relative to the magnitude of the input red component based on the differential between the magnitudes of the input blue and the input green components.

28. The non-transitory computer-readable medium of claim 27, wherein the magnitude of the output green component and the magnitude of the output red component are increased by the same fractional amount.

29. The non-transitory computer-readable medium of claim 27, wherein the magnitude of the output green component and the magnitude of the output red component are increased by different, respective, fractional amounts.

30. The non-transitory computer-readable medium of claim 26, wherein, when displaying a video image based on the output video signals, the video image has a reduced activity differential between L and M cones in a viewer compared to the video image based on the input video signals.

31. The non-transitory computer-readable medium of claim 30, wherein the output video signals sufficiently reduce the activity differential between L and M cones in the viewer to reduce myopia-genic properties of a display compared with the input video signals.

32. The non-transitory computer-readable medium of claim 26, wherein, when displaying a video image based on the output video signals, the video image has a reduced blue saturation for every video image compared to the video image based on the input video signals.

33. The non-transitory computer-readable medium of claim 26, wherein the output video signal is generated such that, for every pixel, for any combination of the input red, green, and blue components, the following relationship is true:

Bo-Go < Bi-Gi, wherein:

Bo is the magnitude of the output blue component;

Go is the magnitude of the output green component;

Bi is the magnitude of the input blue component; and

Gi is the magnitude of the input green component.

34. A circuit for creating video signals, comprising: a comparator configured to (i) receive a blue component input signal and a green component input signal and (ii) output a blue-green differential signal comprising a difference of the blue component input signal and the green component input signal; a block configured to (i) receive the blue-green differential signal and (ii) output a blue-green scaled differential signal only when the blue component input signal is of a greater magnitude than the green component input signal; a first adder configured to (i) receive the green component input signal and the blue-green scaled differential signal and (ii) output a green component output signal comprising a summation of the green component input signal and the blue-green scaled differential signal; and a second adder configured to (i) receive a red component input signal and the blue-green scaled differential signal and (ii) output a red component output signal comprising a summation of the red component input signal and the blue-green scaled differential signal.

35. The circuit of claim 34, wherein the blue-green differential signal received by the first adder and the second adder is a varied blue-green differential signal, the circuit further comprising: a resistive element configured to receive the blue-green differential signal and output the blue-green scaled differential signal.

36. The circuit of claim 35, wherein the resistive element is one of a static resistor, a variable resistor, a rheostat, a variable potentiometer, and a digitally programmable resistor.

37. A device for creating video signals, comprising: a comparator unit configured to: receive a blue component input signal and a green component input signal; compare a magnitude of the blue component input signal and a magnitude of the green component input signal; and output a blue-green differential signal only when the magnitude of the blue component input signal is greater than at least the magnitude of the green component input signal; and an adding unit configured to: receive (i) the blue-green differential signal, (ii) the green component input signal, and (iii) a red component input signal; output a green component output signal comprising a summation of the green component input signal and the blue-green differential signal; and output a red component output signal comprising a summation of the red component input signal and the blue-green differential signal.

38. The device of claim 37, wherein the comparator unit outputs the blue-green differential signal when the magnitude of the blue component input signal is greater than the sum of the magnitude of the green component input signal and a threshold value.

39. The device of claim 37, wherein the blue-green differential signal received by the adding unit is a varied blue-green differential signal, the device further comprising: a scaling unit configured to receive the blue-green differential signal and output the varied blue-green differential signal.

40. The device of claim 39, wherein the scaling unit comprises at least one of a static resistor, a variable resistor, a rheostat, a variable potentiometer, and a digitally programmable resistor.

41. The device of claim 37, further comprising: a display unit configured to: receive at least one of (i) the red component input signal, (ii) the blue component output signal, and (iii) the green component output signal; and display the received at least one of (i) the red component input signal, (ii) the blue component output signal, and (iii) the green component output signal.

42. The device of claim 37, further comprising: a subtracting unit configured to: receive (i) the blue-green differential signal and (ii) the blue component input signal; and output a blue component output signal comprising a difference of the blue component input signal and the blue-green differential signal.

43. The device of claim 37, wherein, when displaying a video image based on the output signals, the video image has a reduced activity differential between L and M cones in a viewer compared to the video image based on the input signals.

44. The device of claim 43, wherein the output signals sufficiently reduce the activity differential between L and M cones in the viewer to reduce myopia-genic properties of a display compared with the input signals.

45. The device of claim 37, wherein, when displaying a video image based on the output signals, the video image has a reduced blue saturation for every video image compared to the video image based on the input signals.

46. The device of claim 37, wherein the output video signal is generated such that, for every pixel, for any combination of the input red, green, and blue components, the following relationship is true:

Bo-Go < Bi-Gi, wherein:

Bo is the magnitude of the output blue component;

Go is the magnitude of the output green component;

Bi is the magnitude of the input blue component; and

Gi is the magnitude of the input green component.

Description:
Myopia-Safe Video Displays

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims priority to U.S. Provisional Application No. 62/928,270, filed October 30, 2019, the entirety of which is herein incorporated by reference.

TECHNICAL FIELD

[0002] The subject matter described herein relates to myopia-safe video displays.

BACKGROUND

[0003] Cone cells, or cones, are photoreceptor cells in the retina of the eye that are responsible for color vision. A human eye typically comprises three types of cones, each of which has a response curve (roughly a normal distribution) over a range of wavelengths of light and a peak sensitivity over a particular, smaller range of wavelengths of light. Long wavelength sensitive cones, also referred to as L cones (or red cones), respond most intensely to light having long wavelengths: the peak sensitivities of L cones are typically around wavelengths 564-580 nm (greenish-yellow light). Medium wavelength sensitive cones, also referred to as M cones (or green cones), respond most intensely to light having medium wavelengths: the peak sensitivities of M cones are typically around wavelengths 534-545 nm (green light). Short wavelength sensitive cones, also referred to as S cones (or blue cones), respond most intensely to light having short wavelengths: the peak sensitivities of S cones are typically around wavelengths 420-440 nm (blue light).

[0004] Myopia is a refractive defect of the eye in which light entering the eye produces image focus in front of the retina, rather than on the retina itself.

Myopia is often colloquially referred to as nearsightedness. Myopia may be measured in diopters, which is a unit of measurement of the optical power of the eye's lens, equal to the reciprocal of the focal length of the lens.

[0005] Television, video games, and computer monitors all cause progression of myopia in children because those displays produce stimuli that cause uneven excitation of the red and green cones. Differential activation of the red and green cones is responsible for the eye elongating abnormally during development, which in turn prevents images from being focused clearly on the retina.

[0006] Each display screen — whether for televisions, computers, tablets, mobile phones, virtual reality (VR) headsets, etc. — has unique spectral output characteristics. That is, the primary colors chosen by the respective display’s manufacturer to reproduce the intended image each have their own unique spectrum centered around a wavelength that the manufacturer has selected to represent that particular color. It is by combining these primary colors in varying proportions that the display screen is able to reproduce the intended colors of the image for the viewer. The full suite of colors available for a given screen is referred to as the gamut of the display. The gamut can be visualized easily using a two-dimensional diagram, like the CIE chromaticity diagram shown in FIGURE 2, where all the colors in the display’s gamut are constrained within the region whose vertices are anchored at the primary colors’ wavelengths. For three-primary screens, which are the most common types of screens today, those colors form a triangle made up of red, green, and blue. Screens that use more than three primary color drivers enable a richer gamut of colors to be presented to the viewer because they cover more area in the xy-plane (per FIGURE 2) and thus carve out a larger gamut.
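The gamut-triangle idea above lends itself to a simple computation: a chromaticity (x, y) is reproducible on a three-primary display exactly when it lies inside the triangle formed by the primaries' chromaticity coordinates. The following Python sketch uses a standard sign-of-cross-product point-in-triangle test; the roughly sRGB-like primary coordinates are illustrative assumptions, not values from this disclosure.

```python
def in_gamut(p, primaries):
    # Point-in-triangle test via signs of cross products: p lies inside
    # (or on the edge of) the triangle iff it sits on the same side of
    # all three directed edges.
    def cross(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    r, g, b = primaries
    s1, s2, s3 = cross(r, g, p), cross(g, b, p), cross(b, r, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

# Illustrative primaries, approximately the sRGB chromaticities
SRGB = ((0.64, 0.33), (0.30, 0.60), (0.15, 0.06))
```

For example, the D65 white point (0.3127, 0.3290) tests inside this triangle, while a highly saturated spectral green such as (0.10, 0.80) tests outside it, i.e., it cannot be reproduced by these three primaries.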

[0007] Some screens, like analog CRTs, tend to produce very strong red signals, which preferentially activate L cones six to eight times more effectively than M cones. The myopiagenic load of such video signals can be reduced by decreasing the magnitude of the red component in each pixel, thereby reducing adjacent LM cone contrast on the retina. The integrity of the picture can be maintained by correspondingly increasing the green and/or blue sub-pixel weight by some offset or offsets, with the goal of maintaining roughly equivalent overall weighting of the sum of the three original sub-pixels.
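As a rough illustration of the red-attenuation idea described above, the following Python sketch reduces the red sub-pixel and redistributes the removed weight to green and blue. This is a sketch under stated assumptions: the function name, the 0.3 attenuation fraction, and the equal green/blue split are illustrative choices, not values taken from this disclosure.

```python
def attenuate_red(r, g, b, red_fraction=0.3):
    # Remove a fraction of the red sub-pixel's weight to reduce
    # preferential L-cone stimulation (illustrative fraction).
    delta = r * red_fraction
    r_out = r - delta
    # Redistribute the removed weight to green and blue so the summed
    # weight of the three sub-pixels stays roughly constant, clamping
    # to the 8-bit component maximum.
    g_out = min(255.0, g + delta / 2)
    b_out = min(255.0, b + delta / 2)
    return r_out, g_out, b_out
```

For a saturated red pixel such as (255, 0, 0), the total weight of the three sub-pixels is preserved while the red component alone is lowered.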

SUMMARY

[0008] Different screens exhibit their own spectral characteristics. For instance, in some virtual reality (VR) headsets, there is strong overlap of the blue primary emission spectra with the photoreceptor sensitivity profile of the human M cone, which means that blue sub-pixel emission from the headset drives the M cones.

Because of this, blue sub-pixel emissions are a driver of myopia in predominantly blue-green hued pixels, where the magnitude of the M cone stimulation from the incident light from this pixel onto the retina exceeds that of the adjacent L cones. FIGURE 3 illustrates this using measured data. Clearly, the blue primary drives the M cone hard, almost as strongly as the red primary drives the L cone, with a ratio of approximately 1.28-to-1.

[0009] Adjacent LM cone contrast on the retina of a viewer can be reduced by attenuating the magnitude of the blue sub-pixel emission in a pixel with a predominantly blue-to-green hue. This methodology can be applied in series with the red sub-pixel attenuation methodology, wherein the myopiagenic toxicity of the red sub-pixel emissions is reduced through a similar approach. Used in concert, these two related but distinct methodologies can act synergistically to produce an even safer screen-viewing experience. The order of the two effects may not be critical to operation of the combined system, as long as they are cascaded serially.

[0010] In a first aspect, the present disclosure describes methods for creating video signals including (a) receiving an input video signal including an input red component, an input green component, and an input blue component; (b) determining (i) that a magnitude of the input blue component is greater than a magnitude of the input green component and (ii) a differential between the magnitude of the input blue component and the magnitude of the input green component; and (c) sending an output video signal including an output red component, an output green component, and an output blue component, where at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.
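The first-aspect method can be sketched in Python as follows. This is a minimal model under stated assumptions: the 0.45 scale factor is an illustrative fraction, option (i) (blue attenuation) is shown while option (ii) is indicated in a comment, and the threshold test of some embodiments is omitted.

```python
def reduce_blue_green_differential(r, g, b, scale=0.45):
    # When blue does not exceed green, the differential (and the
    # signal) is left unchanged.
    if b <= g:
        return r, g, b
    d = b - g                  # blue-green differential
    # Option (i): decrease blue by a fraction of the differential.
    b_out = b - scale * d
    # Option (ii) would instead increase green and red:
    #   g_out = g + scale * d;  r_out = r + scale * d
    return r, g, b_out
```

Either option shrinks the output blue-green differential relative to the input one, i.e., Bo - Go < Bi - Gi whenever Bi > Gi.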

[0011] Similar to various myopia-reducing methodologies (see, e.g., U.S. Patent No. 9,955,133), which are intended to work in concert with the present methodology, the myopia-driving sub-pixel under consideration here (blue, rather than red) is compared against the green sub-pixel to calculate the differential quantity. That is, green is subtracted from blue. The reason for using green is related to the goal of at least some implementations described herein, which is to ameliorate the problem of too much adjacent LM cone contrast - in this case, too much M cone stimulation - due to the superposition of the M cone-stimulating wavelengths of the blue sub-pixel with the M cone stimulating wavelengths of the green sub-pixel. The goal of the methods and technology described in this disclosure is to reduce this superposition effect, with minimal aesthetic impact on the screen.

[0012] As noted above, the methodologies described in this disclosure can be used in concert with other methodologies (see, e.g., U.S. 9,955,133) to create a serial cascade of video processing effects. The steps can be performed in either order. Combining the techniques reduces myopia-genesis through two complementary processes: (i) lowering of myopia-causing red emissions, which overstimulate L cones (as described in the prior art), and (ii) lowering of myopia-causing blue emissions, which overstimulate M cones, all with minimal impact on image integrity.
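The serial cascade of paragraph [0012] amounts to composing independent per-pixel RGB transforms. The sketch below models that composition; the two toy stages and their 0.8 and 0.45 factors are illustrative placeholders, not the patented transforms themselves.

```python
def cascade(pixel, stages):
    # Apply each per-pixel RGB transform in sequence; per the text,
    # the order of the stages is not critical as long as they are
    # cascaded serially.
    r, g, b = pixel
    for stage in stages:
        r, g, b = stage(r, g, b)
    return r, g, b

# Toy stages: a red attenuator and a blue-green contrast reducer.
reduce_red = lambda r, g, b: (0.8 * r, g, b)
reduce_blue = lambda r, g, b: (r, g, b - 0.45 * max(0, b - g))
```

Because these particular toy stages touch disjoint channels, running them in either order produces the same output pixel.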

[0013] In a second aspect, a non-transitory computer-readable medium is described that has program instructions stored thereon that are executable by at least one processor or GPU (graphics processing unit), the program instructions including instructions that cause the processor or GPU to perform steps of any embodiment or combination of embodiments of the methods of the first aspect. For instance, the program instructions may include: (a) instructions for receiving an input video signal including an input red component, an input green component, and an input blue component; (b) instructions for determining (i) that a magnitude of the input blue component is greater than a magnitude of the input green component and (ii) a differential between the magnitude of the input blue component and the magnitude of the input green component; and (c) instructions for sending an output video signal including an output red component, an output green component, and an output blue component, where at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.

[0014] In a third aspect, circuits are provided for creating video signals, including: (a) a comparator configured to (i) receive a blue component input signal and a green component input signal and (ii) output a blue-green differential signal including a difference of the blue component input signal and the green component input signal; (b) an arithmetic block configured to (i) receive the blue-green differential signal and (ii) output a scaled blue-green differential signal when the blue component input signal is of a greater magnitude than the green component input signal; (c) a first adder configured to (i) receive the red component input signal and the scaled blue-green differential signal and (ii) output a red component output signal including a summation of the red component input signal and the scaled blue-green differential signal; (d) a second adder configured to (i) receive a green component input signal and the scaled blue-green differential signal and (ii) output a green component output signal including a summation of the green component input signal and the scaled blue-green differential signal; and (e) a subtractor configured to (i) receive a blue component input signal and the scaled blue-green differential signal and (ii) output a blue component output signal including a subtraction of the scaled blue-green differential signal from the blue component input signal.
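The dataflow of the third-aspect circuit can be modeled in software as below. The gain k, standing in for the scaling/resistive element, is an assumed illustrative value, and the comparator is modeled as gating the differential entirely.

```python
def circuit_model(r_in, g_in, b_in, k=0.25):
    # Comparator: the differential passes through only when the blue
    # input exceeds the green input.
    diff = (b_in - g_in) if b_in > g_in else 0
    scaled = k * diff          # scaling block (resistive element)
    r_out = r_in + scaled      # adder on the red path
    g_out = g_in + scaled      # adder on the green path
    b_out = b_in - scaled      # subtractor on the blue path
    return r_out, g_out, b_out
```

With k = 0.25 and inputs (0, 40, 140), the blue-green differential shrinks from 100 to 50; when blue does not exceed green, the signals pass through unchanged.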

[0015] In a fourth aspect, devices are provided for creating video signals, including: (a) a comparator unit configured to: receive a blue component input signal and a green component input signal; compare a magnitude of the blue component input signal and a magnitude of the green component input signal; and output a blue-green differential signal when the magnitude of the blue component input signal is greater than at least the magnitude of the green component input signal; and (b) an adding unit configured to: receive (i) the blue-green differential signal, (ii) the green component input signal, (iii) a red component input signal, and (iv) the blue component input signal; output a green component output signal including a summation of the green component input signal and a fraction of the blue-green differential signal; output a red component output signal including a summation of the red component input signal and a fraction of the blue-green differential signal; and output a blue component output signal including a subtraction of a fraction of the blue-green differential signal from the blue component input signal.

[0016] The details of one or more variations of the subject matter described herein are set forth in the accompanying drawings and the description below. Other features and advantages of the subject matter described herein will be apparent from the description and drawings, and from the claims.

DESCRIPTION OF DRAWINGS

[0017] FIGURE la shows the simultaneous excitation of all three types of cones when exposed to red phosphor stimuli, in accordance with some implementations of the current subject matter.

[0018] FIGURE lb shows the simultaneous excitation of all three types of cones when exposed to green phosphor stimuli, in accordance with some implementations of the current subject matter.

[0019] FIGURE 2 shows an example RGB (three-color) display gamut within a CIE chromaticity diagram, in accordance with some implementations of the current subject matter.

[0020] FIGURE 3 shows Oculus Rift CV1 primary color spectra as measured by the Neitz lab superimposed onto corneal sensitivity profiles for the S, M, and L cones of a typical 18-year-old, in accordance with some implementations of the current subject matter.

[0021] FIGURE 4 is a schematic of an example circuit, in accordance with some implementations of the current subject matter.

[0022] FIGURE 5 is a schematic of an example circuit, in accordance with some implementations of the current subject matter.

[0023] FIGURE 6 is a block diagram of an example computing device capable of implementing some implementations.

[0024] FIGURE 7 depicts an example computer-readable medium, in accordance with some implementations of the current subject matter.

[0025] FIGURE 8 is a flow chart illustrating an example method, in accordance with some implementations of the current subject matter.

[0026] FIGURE 9a shows a model unit cube representing a standard RGB display, in accordance with some implementations of the current subject matter.

[0027] FIGURE 9b shows a model unit cube representing an exemplary myopia-safe display for which the blue primaries would otherwise cause strong differential stimulation of adjacent LM cones, in accordance with some implementations of the current subject matter.

[0028] FIGURE 9c shows a model unit cube representing an exemplary myopia-safe display for which the red primaries would otherwise cause strong differential stimulation of adjacent LM cones, in accordance with some implementations of the current subject matter.

[0029] FIGURE 9d shows a model unit cube representing an exemplary myopia-safe display for which the red and blue primaries would otherwise cause strong differential stimulation of adjacent LM cones, in accordance with some implementations of the current subject matter.

[0030] Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0031] The particulars shown herein are by way of example and for purposes of illustrative discussion of the preferred implementations of the present subject matter only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of various implementations. In this regard, no attempt is made to show structural details of the implementations in more detail than is necessary for the fundamental understanding of the implementations, the description taken with the drawings and/or examples making apparent to those skilled in the art how the several forms of the implementations may be embodied in practice.

[0032] The following definitions and explanations are meant and intended to be controlling in any future construction unless clearly and unambiguously modified in the following examples or when application of the meaning renders any construction meaningless or essentially meaningless. In cases where the construction of the term would render it meaningless or essentially meaningless, the definition should be taken from Webster's Dictionary, 3rd Edition or a dictionary known to those of skill in the art, such as the Oxford Dictionary of Biochemistry and Molecular Biology (Ed. Anthony Smith, Oxford University Press, Oxford, 2004).

[0033] As used herein and unless otherwise indicated, the terms “a” and “an” are taken to mean “one”, “at least one” or “one or more.” Unless otherwise required by context, singular terms used herein shall include pluralities and plural terms shall include the singular.

[0034] All embodiments disclosed herein can be used in combination, unless the context clearly dictates otherwise.

[0035] In a first aspect, methods are described for creating video signals comprising:

[0036] (a) receiving an input video signal comprising an input red component, an input green component, and an input blue component;

[0037] (b) determining (i) that a magnitude of the input blue component is greater than a magnitude of the input green component and (ii) a differential between the magnitude of the input blue component and the magnitude of the input green component; and

[0038] (c) sending an output video signal comprising an output red component, an output green component, and an output blue component,

[0039] wherein at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.
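As a concrete illustration of variant (ii), the per-pixel adjustment might be sketched as follows (a minimal sketch; the function name, the scaling factor k, and the [0, 1] component range are assumptions, not part of the disclosure):

```python
def desaturate_blue(r, g, b, k=0.3):
    """Sketch of variant (ii): when B exceeds G, raise R and G by a
    fraction k of the B-G differential, desaturating the blue.

    r, g, b are component magnitudes assumed to lie in [0.0, 1.0];
    k (the fraction of the differential applied) is an assumed value.
    """
    diff = b - g
    if diff <= 0.0:
        return r, g, b                     # blue not dominant: unchanged
    out_r = min(1.0, r + k * diff)         # increase red by a fraction of B-G
    out_g = min(1.0, g + k * diff)         # increase green by the same fraction
    return out_r, out_g, b                 # blue unchanged in this variant
```

For example, a strongly blue input such as (0.1, 0.2, 0.9) has a differential of 0.7 and comes out as roughly (0.31, 0.41, 0.9): closer to the achromatic line, yet still perceived as blue.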

[0040] The methods can be used with any electronic displays that utilize the signals of standard video (i.e.: Red, Green, Blue (RGB)). Exemplary such video displays include, but are not limited to, cathode ray tube (CRT) displays, liquid crystal displays (LCD), light-emitting diodes (LED), displays that combine LCDs with LEDs (LCD/LED), organic LED displays, plasma displays, digital light processing displays (DLP), and virtual reality (VR) headsets, among other examples.

[0041] The inventors have demonstrated that myopia can be induced by the difference in excitation levels of medium wavelength (M) or “green” cones and long wavelength (L) or “red” cones when the viewer is presented with blue sub-pixel stimuli whose output emission spectra strongly overlap the excitation profile of M cones.

[0042] Television, video games, and computer monitors all cause progression of myopia in children because those displays produce stimuli that cause uneven excitation of the L and M cones. Differential activation of the L and M cones is responsible for the eye elongating abnormally during development, which in turn prevents images from being focused clearly on the retina. Unnatural blue stimuli such as those from Virtual Reality headset displays preferentially activate M cones over L cones.

[0043] FIGURE 1a shows the simultaneous excitation of all three types of cones when exposed to red phosphor stimuli. As can be seen, the red cones are excited to a relatively large degree in FIGURE 1a. Similarly, FIGURE 1b shows the simultaneous excitation of all three types of cones when exposed to green phosphor stimuli. As can be seen, the green cones are excited to a relatively large degree in FIGURE 1b. Likewise, FIGURE 3 shows that the blue primary output spectra of a popular VR headset overlap significantly with the spectral sensitivity profile of the M (green) cones of a typical 18-year-old human, which implies that the blue sub-pixel output can also be a strong driver of LM cone contrast and thus myopia.

[0044] Color displays cause myopia by producing a difference in activity between L cones and M cones. In children and young adults whose eyes are still developing, differences in activity between neighboring cones signal the eye to grow in length. Overstimulation of the eye with images that produce activity differences between neighboring cones causes the eye to grow too long, resulting in nearsightedness. The amount of eye growth is proportional to the magnitude of the differences in activity between cones. The red primaries in CRTs have been matched by modern display types (including but not limited to LCD, LCD/LED, organic LED displays, plasma displays, and DLP), making virtually all electronics displays prone to causing myopia. Additionally, as described above, the blue primaries in other video display devices such as VR headsets also drive myopia due to their strong activation of M cones. Thus, using the methods to reduce the difference in excitation levels of L and M cones can make video displays more myopia-safe. Since viewers still perceive red, green, and blue even when the difference in L and M cone excitation is greatly reduced, a display can be altered such that the blues are desaturated without losing the basic coloring and image integrity of the original display.

[0045] The methods and myopia-safe displays described herein produce a video output that reduces the activity differences between L and M cones produced by the display compared to a standard display, while having minimal impact on the viewing experience. This is achieved by desaturating the blue colors in the display with the goal of minimizing the superposition of M-cone-stimulating green emissions from the blue and green sub-pixels in the display. The technique can be used in concert with other myopia-safe display techniques (see, e.g., U.S. 9,955,133) as a secondary process to make displays even more myopia-safe.

[0046] Blue primary colors which demonstrate spectral overlap of the M cone sensitivity profile as illustrated in FIGURE 3 produce large activities in M cones but little activity in L cones. Saturation, or more formally excitation purity, is defined here as the difference between a color and white along a line in the International Commission on Illumination 1931 color space (CIE 1931) chromaticity diagram in which all colors have the same hue. White is, by definition, the color that produces equal activity in all three cone types. Saturation is defined in terms of additive color mixing and has the property of being proportional to any scaling along a line in color space centered at white. The closer a color is to white, the more desaturated it is, the smaller the differences in L and M cone activity, and the less myopia-inducing it is. However, color space is nonlinear in terms of psychophysically perceived color differences. The blue color of video displays can be desaturated to drastically reduce the myopia-genic properties of the display with only modest changes in the perceptual experience. Combining any color with its complementary color produces white. If an amount of the complementary color that is less than the amount needed to make white is added, a desaturated version of the color results.

[0047] The “input video signal” for a particular pixel (or, for analog video signals, the pixel equivalent) in a particular video frame comprises red, green, and blue components. The methods comprise determining (i) that an intensity of the blue component is greater than an intensity of the green component; and (ii) a differential between the intensity of the blue component and the intensity of the green component. Once the differential is determined, a modified output video signal is generated in which some of the blue color is desaturated to replace the original color with one less likely to induce myopia. This is done by at least one of the following processes:

[0048] (i) the output blue (B) component is decreased by a fractional amount relative to the input blue component based on the differential;

[0049] and/or (ii) the output green (G) component is increased by a fractional amount relative to the input green component based on the differential, and the output red (R) component is increased by a fractional amount relative to the input red component based on the differential.

[0050] Whenever the intensity of B is greater than G (i.e., B>G), the saturation and the potential for causing myopia can be reduced by reducing the intensity of B. It is preferred to also increase the intensity of both the red and green components to maintain as constant a hue experience as possible. Increasing the intensity of R along with G moves the color toward white since R=G=B makes white, which is the most desaturated color. Thus, in some implementations, the method includes desaturating the blue component by increasing the output green component by a fractional amount relative to the input green component based on the differential and increasing the output red component by a fractional amount relative to the input red component based on the differential. In one further embodiment, the output green component and the output red component are increased by the same fractional amount. Ideally, for most viewing conditions, the goal is to reduce the saturation of the blue colors in the image without changing their hue or brightness. This is mainly achieved by increasing G and R equally. In another further embodiment, the output green component and the output red component are increased by different, respective, fractional amounts. Increasing R more than G, or G more than R, might be done to optimize the reduction of the potential for inducing myopia and to maximize the viewing experience. In a further embodiment, the output blue component may be decreased by a fractional amount relative to the input blue component based on the differential.

[0051] In a still further embodiment of any embodiment herein, the output blue (B) component is decreased by a fractional amount relative to the input blue component based on the differential, where the R and G components are not modified. Decreasing the B-G differential by decreasing B will tend to make the color darker (decreasing its brightness) and may be done in some circumstances to optimize the reduction of the potential for inducing myopia and to maximize the viewing experience.

[0052] In each of these embodiments, the methods comprise changing an output component intensity by some fractional amount of the B-G differential, when B is greater than G. When B-G is less than or equal to zero, R, G, and B are unchanged. When B-G is greater than zero, then R and G are increased by a percentage of B-G and/or B is decreased by a percentage of the differential. Any suitable percentage can be used as is determined appropriate for a given system and a desired level of protection against myopia. Small percentages (e.g., up to and including 25% to 45%) make the display safer and have little perceptual effect on the viewing experience, while larger percentages (e.g., 46% to 80% and greater) could make the display completely myopia-safe but with some cost in terms of realism of the displayed image.
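The blue-only reduction described in paragraph [0051], gated on the sign of B-G as described here, might be sketched as follows (the function name and the default percentage are assumptions, and components are assumed to lie in [0, 1]):

```python
def lower_blue(r, g, b, pct=0.35):
    """Sketch: decrease only the blue component by a percentage of
    the B-G differential, leaving R and G unmodified (the darker
    variant of paragraph [0051]). pct is an assumed default value.
    """
    diff = b - g
    if diff <= 0.0:
        return r, g, b                     # B-G <= 0: all components unchanged
    return r, g, max(0.0, b - pct * diff)  # reduce blue toward the green level
```

With pct = 0.5, an input of (0.2, 0.3, 0.8) keeps R and G and lowers B to about 0.55, roughly halving the B-G differential at the cost of some brightness.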

[0053] In some implementations of the embodiments disclosed above, or combinations thereof, the method (e.g., modification of the output signal) is carried out only if the B-G differential is greater than a predetermined threshold. As one example, the predetermined threshold may be set equal to 30% of B. In such a case, G must be less than or equal to 70% of B for the method to be carried out.
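The predetermined-threshold condition can be expressed as a simple predicate (a sketch; the function name and the default fraction are assumptions):

```python
def exceeds_threshold(b, g, threshold_frac=0.30):
    """Sketch of the gating condition: modify the signal only when
    the B-G differential exceeds threshold_frac * B. With the 30%
    example above, this requires G to be at most about 70% of B.
    """
    return (b - g) > threshold_frac * b
```

For B = 1.0, a green component of 0.6 passes the gate (differential 0.4 > 0.3) while 0.8 does not (differential 0.2).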

[0054] In a second aspect, the present disclosure describes a non-transitory and/or physical computer-readable medium having program instructions stored thereon that are executable by at least one processor, the program instructions comprising instructions that cause the processor to perform steps of any embodiment or combination of embodiments of the methods of the first aspect of the implementation. For instance, the program instructions may comprise:

[0055] (a) instructions for receiving an input video signal comprising an input red component, an input green component, and an input blue component;

[0056] (b) instructions for determining (i) that a magnitude of the input blue component is greater than a magnitude of the input green component and (ii) a differential between the magnitude of the input blue component and the magnitude of the input green component; and

[0057] (c) instructions for sending an output video signal comprising an output red component, an output green component, and an output blue component,

[0058] wherein at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.

[0059] The computer-readable media can be used with any electronic display that utilizes the signals of standard video (i.e.: Red, Green, Blue (RGB)). Exemplary such video displays include, but are not limited to, cathode ray tube (CRT) displays, liquid crystal displays (LCD), light-emitting diodes (LED), displays that combine LCDs with LEDs (LCD/LED), organic LED displays, plasma displays, digital light processing displays (DLP), and virtual reality (VR) headsets.

[0060] As used herein the term “non-transitory and/or physical computer readable medium” includes magnetic disks, optical disks, organic memory, and any other volatile (e.g., Random Access Memory (“RAM”)) or non-volatile (e.g., Read-Only Memory (“ROM”)) mass storage system readable by the CPU or GPU (graphics processing unit). The computer readable medium includes cooperating or interconnected computer readable media, which exist exclusively on the processing system or which may be distributed among multiple interconnected processing systems that may be local or remote to the processing system.

[0061] FIGURE 7 is a schematic illustrating a conceptual partial view of an example computer program product that includes a computer program for executing a computer process on a computing device, arranged according to at least some embodiments presented herein.

[0062] In one embodiment, the example computer program product 700 is provided using a signal bearing medium 702. The signal bearing medium 702 may include one or more programming instructions 704 that, when executed by one or more processors may provide functionality or portions of the functionality described herein with respect to FIGURES 1-6. In some examples, the signal bearing medium 702 may encompass a computer-readable medium 706, such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, memory, etc. In some implementations, the signal bearing medium 702 may encompass a computer recordable medium 708, such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, etc. In some implementations, the signal bearing medium 702 may encompass a communications medium 710, such as, but not limited to, a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.). Thus, for example, the signal bearing medium 702 may be conveyed by a wireless form of the communications medium 710.

[0063] The one or more programming instructions 704 may be, for example, computer executable and/or logic implemented instructions. In some examples, a computing device such as the computer system 600 of FIGURE 6 (discussed further below) may be configured to provide various operations, functions, or actions in response to the programming instructions 704 conveyed to the computer system 600 by one or more of the computer readable medium 706, the computer recordable medium 708, and/or the communications medium 710.

[0064] The non-transitory and/or physical computer readable medium could also be distributed among multiple data storage elements, which could be remotely located from each other. The computing device that executes some or all of the stored instructions could be a computer system, such as computer system 600 illustrated in FIGURE 6. Alternatively, the computing device that executes some or all of the stored instructions could be another computing device, such as a server.

[0065] In a third aspect, some implementations provide circuits for creating video signals, comprising:

[0066] a comparator configured to (i) receive a blue component input signal and a green component input signal and (ii) output a blue-green differential signal comprising a difference of the blue component input signal and the green component input signal;

[0067] a module configured to (i) receive the blue-green differential signal and (ii) output a blue-green scaled-differential signal when the blue component input signal is of a greater magnitude than the green component input signal;

[0068] a first adder configured to (i) receive the green component input signal and the blue-green scaled-differential signal and (ii) output a green component output signal comprising a summation of the green component input signal and the blue-green scaled-differential signal; and

[0069] a second adder configured to (i) receive a red component input signal and the blue-green scaled-differential signal and (ii) output a red component output signal comprising a summation of the red component input signal and the blue-green scaled-differential signal.

[0070] The blue-green differential signal received by the first adder and the second adder may be a varied blue-green differential signal, and an analog version of the circuit may further include a resistive element configured to receive the blue-green differential signal and output the varied blue-green differential signal. The resistive element may be one of a static resistor, a variable resistor, a rheostat, a variable potentiometer, and a digitally programmable resistor. Each of the comparator and the adders may be a respective differential amplifier. In an embodiment, the comparator may be a feedback differential amplifier. Alternatively, the circuit may be implemented entirely using digital logic.

[0071] The circuit may further include a subtractor configured to (i) receive the blue component input signal and the blue-green scaled-differential signal and (ii) output a blue component output signal comprising a difference of the blue component input signal and the blue-green scaled-differential signal. The subtractor may be implemented in digital logic or using a differential amplifier in the case of an analog circuit.

[0072] As will be understood by those of skill in the art, the circuit may include additional and/or alternative circuit elements necessary to achieve any particular desired functionality, including any of the functionality described above with respect to the first aspect. As but a few examples, the circuit may include additional static resistive elements, variable resistive elements, direct-current power sources, alternating-current power sources, and/or differential amplifiers.

[0073] The third aspect of the implementations described herein is discussed further below with respect to FIGURE 4 and FIGURE 5.

[0074] In a fourth aspect, devices are described for creating video signals, comprising:

[0075] a comparator unit configured to: receive a blue component input signal and a green component input signal; compare a magnitude of the blue component input signal and a magnitude of the green component input signal; and output a blue-green differential signal when the magnitude of the blue component input signal is greater than at least the magnitude of the green component input signal; and

[0076] an adding unit configured to: receive (i) the blue-green differential signal, (ii) the green component input signal, and (iii) a red component input signal; output a green component output signal comprising a summation of the green component input signal and the blue-green differential signal; and output a red component output signal comprising a summation of the red component input signal and the blue-green differential signal.

[0077] The comparator unit may operate according to the comparator functionality described above with respect to the first aspect of the implementation described herein, and/or may be implemented using any suitable aspects of the circuitry, computing device, or computer readable medium described above with respect to the second and third aspects of the implementation discussed above. Thus, the comparator unit may include the functionality described above with respect to the third aspect of the implementation described herein.

[0078] Further, the adding unit may operate according to the adding functionality described above with respect to the first aspect of the implementation described herein, and/or may be implemented using any suitable aspects of the circuitry, computing device, or computer readable medium described above with respect to the second and third aspects discussed above, respectively. Thus, the adding unit may include the adders described above with respect to the third aspect of the implementation described herein.

[0079] The comparator may output the blue-green differential signal when the magnitude of the blue component input signal is greater than the sum of the magnitude of the green component input signal and a threshold value. The blue-green differential signal received by the adding unit may be a varied blue-green differential signal and the device may also include a scaling unit configured to receive the blue-green differential signal and output the varied blue-green differential signal. The scaling unit may include at least one of a static resistor, a variable resistor, a rheostat, a variable potentiometer, and a digitally programmable resistor.

[0080] The device may further include a display unit such as at least one of a cathode ray tube (CRT) display, a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic LED display, a plasma display, a digital light processing (DLP) display, or a virtual reality (VR) headset, among other examples. In an embodiment, the display device may be configured to receive at least one of (i) the blue component input signal, (ii) the red component output signal, and (iii) the green component output signal and to display the received at least one of (i) the blue component input signal, (ii) the red component output signal, and (iii) the green component output signal.

[0081] In an alternative embodiment, the device may further include a subtracting unit configured to receive (i) the blue-green differential signal and (ii) the blue component input signal and to output a blue component output signal comprising a difference of the blue component input signal and the blue-green scaled-differential signal. In such an embodiment, the display device may be configured to receive at least one of (i) the red component output signal, (ii) the green component output signal, and (iii) the blue component output signal and to display the received at least one of (i) the red component output signal, (ii) the green component output signal, and (iii) the blue component output signal.

[0082] The fourth aspect of the implementation described herein is discussed further below with respect to FIGURE 4 and FIGURE 5.

[0083] FIGURE 8 is a flow chart illustrating an example method 800 for creating myopia-safe video signals. At step 802, method 800 involves receiving an input video signal comprising an input red component, an input green component, and an input blue component. At step 804, method 800 involves determining (i) that a magnitude of the input blue component is greater than a magnitude of the input green component and (ii) a differential between the magnitude of the input blue component and the magnitude of the input green component. And at step 806, method 800 involves sending an output video signal comprising an output red component, an output green component, and an output blue component. In accordance with step 806, at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.

[0084] As will be discussed further, FIGURE 4 and FIGURE 5 are two example schematics of the circuit logic needed to carry out example method 800. Method 800 may be carried out by other circuits, devices, and/or components thereof as well. As is the case with many display types, the example schematics of FIGURE 4 and FIGURE 5 have three components, segregated by color: red, green, and blue.

[0085] Returning to method 800, at step 802, method 800 involves receiving an input video signal comprising an input red component, an input green component, and an input blue component. The method analyzes those three input components and adjusts them to create a more myopia-safe display. Herein, the phrase “myopia-safe” refers to any appreciable reduction in the risk of myopia that a stimulus poses; it does not necessarily correspond to a complete lack of risk of myopia. Input red component 402 (or 502), input green component 404 (or 504), and input blue component 406 (or 506) are shown in example schematic 400 (or 500, respectively).

[0086] At step 804, method 800 involves determining (i) that a magnitude of the input blue component is greater than a magnitude of the input green component and (ii) a differential between the magnitude of the input blue component and the magnitude of the input green component. In accordance with step 804, the method first differences the input blue and green channels (B-G), or differences input blue component 406 (or 506) and input green component 404 (or 504). Such a difference may be accomplished via the differencing element 408 (or 508). When the input green component 404 (or 504) is of greater magnitude than the input blue component 406 (or 506) (i.e. G>B or B-G<0), no change in the output is needed.

[0087] However, when a positive difference between input blue component 406 (or 506) and input green component 404 (or 504) exists and is large (or greater than some threshold value), there is a danger of inducing myopia. The method then comprises determining an appropriately scaled differential based on the B-G difference. In one embodiment, to accomplish this, the method sends the determined differential through a variable gain element 510 to determine an appropriately scaled differential.

[0088] At step 806, method 800 involves sending an output video signal comprising an output red component, an output green component, and an output blue component. In accordance with step 806, at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.

[0089] Accordingly, as shown in schematic 400, the system sends the blue-green differential signal (including threshold value, t) 412 through to the output red component 418, the output green component 414, and the output blue component 416. The output green component 414 and the output red component 418 are then incremented by the net blue-green differential or a null signal 412, in an amount proportional to the B-G difference.

[0090] In an embodiment, the output blue component 416 (or 516) remains the same as the input blue component. That is, the output blue component 416 (or 516) is not decreased by a fractional amount relative to the input blue component 406 (or 506). Such an arrangement is depicted in example circuit schematic 400 (and 500). In an alternative embodiment, the output blue component may be decreased by a fractional amount relative to the input blue component based on the differential. According to such an embodiment, the blue component may include a subtractor configured to subtract the differential 412 (or 512) from the input blue component 406 (or 506).

[0091] As disclosed herein, other embodiments involve decreasing the blue component, increasing the green and red components by different amounts, and combinations thereof. For instance, the output green component 414 (or 514) may be increased by a fractional amount relative to the input green component 404 (or 504) based on the differential 412 (or 512) and the output red component 418 (or 518) may be increased by a fractional amount relative to the input red component 402 (or 502) based on the differential 412 (or 512). In an embodiment the output green component 414 (or 514) and the output red component 418 (or 518) are increased by the same fractional amount. In such an embodiment, adder 420 (or 520) and adder 422 (or 522) may be configured similarly.

[0092] In an alternative embodiment, the output green component 414 (or 514) and the output red component 418 (or 518) are increased by different, respective, fractional amounts. In such an embodiment, adder 420 (or 520) and adder 422 (or 522) may be configured differently so as to cause the desired different increases.

[0093] The incrementing of the green and red channels (or components) corresponds to an increase of the green and red components that are eventually displayed, which “desaturates” the blue color (i.e., brings it closer to white) and reduces the differential activity of the L (red) and M (green) cones. For a typical red stimulus, the difference in activity between L and M cones is much greater than is needed to produce a red percept. That is, though there is an 80% differential between the excitation of green cones and red cones in the presence of light emitted by the red phosphor, a person would still perceive the stimulus as “red” if the excitation of the green cones were increased to be much closer than, but still short of, that of the red cones. Therefore, a viewer can tolerate a large amount of red “desaturation” before a significant degradation in the red color (or hue) is perceived. Likewise, a viewer can tolerate a large amount of blue “desaturation” before a significant degradation in the blue color (or hue) is perceived. Thus, the myopia-producing difference between L and M cones is reduced by addition of the scaled differential to the green and red channels without much perceived loss in the quality of the color display.

[0094] FIGURE 9a shows a model unit cube representing a standard red, green, and blue (RGB) color model display. The standard RGB model of FIGURE 9a is a unit cube situated on three axes: red, green, and blue. The RGB model or display is capable of rendering all colors within the unit cube, and each point within the cube represents a renderable color. The coordinates of a point represent the intensities of each primary axis color in rendering the color of the point. For example, the colors red, green, and blue are points at the comers (1, 0, 0), (0, 1, 0), and (0, 0, 1) respectively. Black is at (0, 0, 0), white is at (1, 1, 1), and grays are any equal combination of the three primaries (e.g. (0.5, 0.5, 0.5) or (0.13, 0.13, 0.13)). These grays are represented by a straight line, labeled gray scale, which passes through both the black (origin) and the white (1, 1, 1) points. All achromatic lights — black, white, and all levels of gray — lie along this line, and it can therefore be referred to as the “achromatic line.” Cyan (0, 1, 1) is the combination of green and blue with no red; magenta (1, 0, 1) is the combination of blue and red with no green; and yellow (1 1, 0) is the combination of red and green with no blue.

[0095] FIGURE 9b shows the model unit cube of FIGURE 9a, as it would be altered in a system of the implementations described herein, for example after the implementation of the method described in reference to FIGURES 4, 5 and 8. The system modifies a typical RGB system so that when blues are presented alone, a portion of their energy is diverted into both the green and red channels, moving the color closer to the achromatic line. (The respective red, green, and blue coordinates may be thought of as relative intensities, such that where the blue channel intensity remains constant and the red and green intensities are increased, a corresponding point in the cube would move positively along the green and red axes.) The amount of energy diverted to the other channels is a fraction (between zero and one) multiple of the blue channel. FIGURE 9c shows the model unit cube of FIGURE 9a, as it may be altered in a system designed based on other methodologies (see, e.g., U.S. 9,955,133). Likewise, FIGURE 9d shows a combination of effects of both methods, as might be realized in a real application in which both techniques were implemented sequentially.

[0096] A circuit could be implemented using any suitable components, digital or analog, capable of performing these basic operations (e.g. digital signal processors, microcontrollers, microprocessors, graphics processors, FPGAs, ASICs, etc.), or could be implemented in software. As understood by those of skill in the art, the outputs may be provided, generated, or otherwise created by any suitable combination of circuit elements, hardware, and/or software.

[0097] FIGURE 6 is a block diagram of an example computing device 600 capable of implementing the embodiments described above and other embodiments. Example computing device 600 includes a processor 602, data storage 604, and a communication interface 606, all of which may be communicatively linked together by a system bus, network, or other mechanism 608. Processor 602 may comprise one or more general purpose processors (e.g., INTEL microprocessors) or one or more special purpose processors (e.g., digital signal processors, FPGA, ASIC, etc.) Communication interface 606 may allow data to be transferred between processor 602 and input or output devices or other computing devices, perhaps over an internal network or the Internet. Instructions and/or data structures may be transmitted over the communication interface 606 via a propagated signal on a propagation medium (e.g., electromagnetic wave(s), sound wave(s), etc.). Data storage 604, in turn, may comprise one or more storage components or physical and/or non-transitory computer-readable media, such as magnetic, optical, or organic storage mechanisms, and may be integrated in whole or in part with processor 602. Data storage 604 may contain program logic 610.

[0098] Program logic 610 may comprise machine language instructions or other sorts of program instructions executable by processor 602 to carry out the various functions described herein. For instance, program logic 610 may define logic executable by processor 602, to receive video display channel inputs, to adjust those inputs according to the methods of the implementations described herein, and to output the adjusted video display channels. In alternative embodiments, it should be understood that these logical functions can be implemented by firmware or hardware, or by any combination of software, firmware, and hardware.

[0099] Exemplary embodiments of the implementations described herein have been described above. Those skilled in the art will understand, however, that changes and modifications may be made to the embodiments described without departing from the true scope and spirit of the implementations described herein. For example, the depicted flow charts may be altered in a variety of ways. For instance, the order of the steps may be rearranged, steps may be performed in parallel, steps may be omitted, or other steps may be included. Accordingly, the disclosure is not limited except as by the appended claims. All embodiments of the implementations described herein may be combined in any combination unless the context clearly dictates otherwise. And at step 806, method 800 involves sending an output video signal comprising an output red component, an output green component, and an output blue component. In accordance with step 806, at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.

[00100] As will be discussed further, FIGURE 4 and FIGURE 5 are two example schematics of the circuit logic needed to carry out example method 800. Method 800 may be carried out by other circuits, devices, and/or components thereof as well. As is the case with many display types, the example schematics of FIGURE 4 and FIGURE 5 have three components, segregated by color: red, green, and blue.

[00101] Returning to method 800, at step 802, method 800 involves receiving an input video signal comprising an input red component, an input green component, and an input blue component. The method analyzes those three input components and adjusts them to create a more myopia-safe display. Herein, the phrase “myopia-safe” refers to any appreciable reduction in the risk of myopia posed by a stimulus; it does not necessarily correspond to a complete elimination of that risk. Input red component 402 (or 502), input green component 404 (or 504), and input blue component 406 (or 506) are shown in example schematic 400 (or 500, respectively).

[00102] At step 804, method 800 involves determining (i) that a magnitude of the input blue component is greater than a magnitude of the input green component and (ii) a differential between the magnitude of the input blue component and the magnitude of the input green component. In accordance with step 804, the method first differences the input blue and green channels (B-G), that is, it differences input blue component 406 (or 506) and input green component 404 (or 504). Such a difference may be accomplished via the differencing element 408 (or 508). When the input green component 404 (or 504) is of greater magnitude than the input blue component 406 (or 506) (i.e., G>B, or B-G<0), no change in the output is needed.

[00103] However, when a positive difference between input blue component 406 (or 506) and input green component 404 (or 504) exists and is large (i.e., greater than some threshold value), there is a danger of inducing myopia. The method then comprises determining an appropriately scaled differential based on the B-G difference. In one embodiment, this is accomplished by sending the determined differential through a variable gain element 510.
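Steps 802 through 804 (differencing the blue and green channels, thresholding, and applying a variable gain) can be sketched as follows. This is an illustrative sketch only: channel values are assumed normalized to [0, 1], the function name is hypothetical, and the `threshold` and `gain` defaults are placeholders rather than values taken from the specification, which leaves the exact handling of the threshold open.

```python
def scaled_differential(b, g, threshold=0.0, gain=0.5):
    """Return the scaled B-G differential of steps 802-804, or 0.0 when
    no correction is needed (G >= B, or the excess is at or below the
    threshold)."""
    diff = b - g
    if diff <= threshold:
        # G >= B, or the difference is too small to pose a myopia risk.
        return 0.0
    # Variable-gain element: scale the differential by a fraction in (0, 1).
    return gain * diff
```

In terms of schematic 500, the gain corresponds to variable gain element 510, and a zero output corresponds to the "no change needed" branch.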

[00104] At step 806, method 800 involves sending an output video signal comprising an output red component, an output green component, and an output blue component. In accordance with step 806, at least one of the following is true: (i) the output blue component is decreased by a fractional amount relative to the input blue component based on the differential; and/or (ii) the output green component is increased by a fractional amount relative to the input green component based on the differential, and the output red component is increased by a fractional amount relative to the input red component based on the differential.

[00105] Accordingly, as shown in schematic 400, the system sends the blue-green differential signal (including threshold value, t) 412 through to the output red component 418, the output green component 414, and the output blue component 416. The output green component 414 and the output red component 418 are then incremented by the net blue-green differential (or a null signal) 412, in an amount proportional to the B-G difference.

[00106] In an embodiment, the output blue component 416 (or 516) remains the same as the input blue component. That is, the output blue component 416 (or 516) is not decreased by the fractional amount relative to the input blue component 406 (or 506). Such an arrangement is depicted in example circuit schematic 400 (and 500). In an alternative embodiment, the output blue component may be decreased by a fractional amount relative to the input blue component based on the differential. According to such an embodiment, the blue channel may include a subtractor configured to subtract the differential 412 (or 512) from the input blue component 406 (or 506).
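The two output variants of step 806 can be combined into a single sketch: either the blue component is decreased by the scaled differential, or the red and green components are increased by it. The function below is a hypothetical illustration; channel values are assumed normalized to [0, 1], and the `gain` value and the clipping at 1.0 are assumptions not fixed by the method.

```python
def myopia_safe_rgb(r, g, b, gain=0.5, decrease_blue=False):
    """Sketch of method 800: reduce the output B-G differential of a pixel
    whenever the input blue component exceeds the input green component."""
    diff = b - g
    if diff <= 0:
        # Step 804: blue does not exceed green, so no change is needed.
        return (r, g, b)
    d = gain * diff  # scaled differential (variable gain element)
    if decrease_blue:
        # Step 806, variant (i): pull blue down toward green.
        return (r, g, b - d)
    # Step 806, variant (ii): raise red and green by the scaled
    # differential, clipped to the display maximum of 1.0.
    return (min(r + d, 1.0), min(g + d, 1.0), b)

# Pure blue is partially desaturated rather than passed through unchanged:
myopia_safe_rgb(0.0, 0.0, 1.0)  # (0.5, 0.5, 1.0)
```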

[00107] As disclosed herein, other embodiments involve decreasing the blue component, increasing the green and red components by different amounts, and combinations thereof. For instance, the output green component 414 (or 514) may be increased by a fractional amount relative to the input green component 404 (or 504) based on the differential 412 (or 512), and the output red component 418 (or 518) may be increased by a fractional amount relative to the input red component 402 (or 502) based on the differential 412 (or 512). In an embodiment, the output green component 414 (or 514) and the output red component 418 (or 518) are increased by the same fractional amount. In such an embodiment, adder 420 (or 520) and adder 422 (or 522) may be configured similarly.

[00108] In an alternative embodiment, the output green component 414 (or 514) and the output red component 418 (or 518) are increased by different, respective fractional amounts. In such an embodiment, adder 420 (or 520) and adder 422 (or 522) may be configured differently so as to cause the desired different increases.

[00109] The incrementing of the green and red channels (or components) corresponds to an increase of the green and red components that are eventually displayed, which “desaturates” the blue color (i.e., brings it closer to white) and reduces the differential activity of the L (red) and M (green) cones. For a typical red stimulus, the difference in activity between L and M cones is much greater than is needed to produce a red percept. That is, though there is an 80% differential between the excitation of green cones and red cones in the presence of light emitted by the red phosphor, a person would still perceive the stimulus as “red” if the excitation of the green cones were increased to be much closer than, but still short of, that of the red cones. Therefore, a viewer can tolerate a large amount of red “desaturation” before a significant degradation in the red color (or hue) is perceived. Likewise, a viewer can tolerate a large amount of blue “desaturation” before a significant degradation in the blue color (or hue) is perceived. Thus, the myopia-producing difference between L and M cones is reduced by addition of the scaled differential to the green and red channels without much perceived loss in the quality of the color display.

[00110] FIGURE 9a shows a model unit cube representing a standard red, green, and blue (RGB) color model display. The standard RGB model of FIGURE 9a is a unit cube situated on three axes: red, green, and blue. The RGB model or display is capable of rendering all colors within the unit cube, and each point within the cube represents a renderable color. The coordinates of a point represent the intensities of each primary axis color in rendering the color of the point. For example, the colors red, green, and blue are points at the corners (1, 0, 0), (0, 1, 0), and (0, 0, 1), respectively. Black is at (0, 0, 0), white is at (1, 1, 1), and grays are any equal combination of the three primaries (e.g., (0.5, 0.5, 0.5) or (0.13, 0.13, 0.13)). These grays are represented by a straight line, labeled gray scale, which passes through both the black (origin) and the white (1, 1, 1) points. All achromatic lights (black, white, and all levels of gray) lie along this line, and it can therefore be referred to as the “achromatic line.” Cyan (0, 1, 1) is the combination of green and blue with no red; magenta (1, 0, 1) is the combination of blue and red with no green; and yellow (1, 1, 0) is the combination of red and green with no blue.
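The geometry of the achromatic line lends itself to a short illustrative sketch: the gray point nearest to any RGB point has all three coordinates equal to the channel mean, so the distance from a color to the achromatic line follows directly. The helper below is a hypothetical illustration, not part of the specification; channel values are assumed normalized to [0, 1].

```python
import math

def achromatic_distance(r, g, b):
    """Distance from an RGB point to the achromatic line, i.e. the gray
    line through black (0, 0, 0) and white (1, 1, 1) in the unit cube."""
    # The nearest gray point has all coordinates equal to the channel mean.
    m = (r + g + b) / 3.0
    return math.sqrt((r - m) ** 2 + (g - m) ** 2 + (b - m) ** 2)

# Grays lie on the line; saturated primaries such as pure blue sit far from it.
achromatic_distance(0.5, 0.5, 0.5)  # 0.0
achromatic_distance(0.0, 0.0, 1.0)  # sqrt(6)/3, about 0.816
```

By this measure, the six saturated corner colors (the primaries and their pairwise combinations) are all equally far from the line, while every gray has distance zero.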

[00111] FIGURE 9b shows the model unit cube of FIGURE 9a, as it would be altered in a system of the implementations described herein, for example after the implementation of the method described in reference to FIGURE 8, FIGURE 4, and FIGURE 5. The system modifies a typical RGB system so that when blues are presented alone, a portion of their energy is diverted into both the green and red channels, moving the color closer to the achromatic line. (The respective red, green, and blue coordinates may be thought of as relative intensities, such that where the blue channel intensity remains constant and the red and green intensities are increased, a corresponding point in the cube would move positively along the green and red axes.) The amount of energy diverted to the other channels is a fraction (between zero and one) multiple of the blue channel. FIGURE 9c shows the model unit cube of FIGURE 9a, as it would be altered in a system designed based on other methodologies (see, e.g., U.S. 9,955,133). Likewise, FIGURE 9d shows a combination of effects of both methods, as might be realized in a real application in which both techniques were implemented sequentially.
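The adjustment depicted in FIGURE 9b can be sketched directly: a fraction (between zero and one) multiple of the blue channel is added to the red and green channels while the blue channel itself is held constant. The function name and the `fraction` default below are illustrative assumptions, not values from the specification.

```python
def divert_blue_energy(rgb, fraction=0.4):
    """FIGURE 9b-style adjustment: add a fractional multiple of the blue
    channel to red and green, moving the color toward the achromatic line."""
    r, g, b = rgb
    d = fraction * b
    # Blue remains constant; the point moves positively along the red and
    # green axes, clipped to the unit cube.
    return (min(r + d, 1.0), min(g + d, 1.0), b)

# Pure blue becomes a partially desaturated blue; colors with no blue
# component are unchanged.
divert_blue_energy((0.0, 0.0, 1.0))  # (0.4, 0.4, 1.0)
```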

[00112] A circuit could be implemented using any suitable components, digital or analog, capable of performing these basic operations (e.g. digital signal processors, microcontrollers, microprocessors, graphics processors, FPGAs, ASICs, etc.), or could be implemented in software. As understood by those of skill in the art, the outputs may be provided, generated, or otherwise created by any suitable combination of circuit elements, hardware, and/or software.

[00113] FIGURE 6 is a block diagram of an example computing device 600 capable of implementing the embodiments described above and other embodiments. Example computing device 600 includes a processor 602, data storage 604, and a communication interface 606, all of which may be communicatively linked together by a system bus, network, or other mechanism 608. Processor 602 may comprise one or more general purpose processors (e.g., INTEL microprocessors) or one or more special purpose processors (e.g., digital signal processors, FPGAs, ASICs, etc.). Communication interface 606 may allow data to be transferred between processor 602 and input or output devices or other computing devices, perhaps over an internal network or the Internet. Instructions and/or data structures may be transmitted over the communication interface 606 via a propagated signal on a propagation medium (e.g., electromagnetic wave(s), sound wave(s), etc.). Data storage 604, in turn, may comprise one or more storage components or physical and/or non-transitory computer-readable media, such as magnetic, optical, or organic storage mechanisms, and may be integrated in whole or in part with processor 602. Data storage 604 may contain program logic 610.

[00114] Program logic 610 may comprise machine language instructions or other sorts of program instructions executable by processor 602 to carry out the various functions described herein. For instance, program logic 610 may define logic executable by processor 602, to receive video display channel inputs, to adjust those inputs according to the methods of the implementations described herein, and to output the adjusted video display channels. In alternative embodiments, it should be understood that these logical functions can be implemented by firmware or hardware, or by any combination of software, firmware, and hardware.

[00115] Exemplary embodiments of the implementations described herein have been described above. Those skilled in the art will understand, however, that changes and modifications may be made to the embodiments described without departing from the true scope and spirit of the implementations described herein. For example, the depicted flow charts may be altered in a variety of ways. For instance, the order of the steps may be rearranged, steps may be performed in parallel, steps may be omitted, or other steps may be included. Accordingly, the disclosure is not limited except as by the appended claims. All embodiments of the implementations described herein may be combined in any combination unless the context clearly dictates otherwise.

[00116] Various implementations of the subject matter described herein can be realized/implemented in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various implementations can be implemented in one or more computer programs. These computer programs can be executable and/or interpreted on a programmable system. The programmable system can include at least one programmable processor, which can have a special purpose or a general purpose. The at least one programmable processor can be coupled to a storage system, at least one input device, and at least one output device. The at least one programmable processor can receive data and instructions from, and can transmit data and instructions to, the storage system, the at least one input device, and the at least one output device.

[00117] These computer programs (also known as programs, software, software applications or code) can include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the term “machine-readable medium” can refer to any computer program product, apparatus and/or device (for example, magnetic discs, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that can receive machine instructions as a machine-readable signal. The term “machine-readable signal” can refer to any signal used to provide machine instructions and/or data to a programmable processor.

[00118] To provide for interaction with a user, the subject matter described herein can be implemented on a computer that can display data to one or more users on a display device, such as a cathode ray tube (CRT) device, a liquid crystal display (LCD) monitor, a light emitting diode (LED) monitor, or any other display device.

The computer can receive data from the one or more users via a keyboard, a mouse, a trackball, a joystick, or any other input device. To provide for interaction with the user, other devices can also be provided, such as devices operating based on user feedback, which can include sensory feedback, such as visual feedback, auditory feedback, tactile feedback, and any other feedback. The input from the user can be received in any form, such as acoustic input, speech input, tactile input, or any other input.

[00119] The subject matter described herein can be implemented in a computing system that can include at least one of a back-end component, a middleware component, a front-end component, and one or more combinations thereof. The back-end component can be a data server. The middleware component can be an application server. The front-end component can be a client computer having a graphical user interface or a web browser, through which a user can interact with an implementation of the subject matter described herein. The components of the system can be interconnected by any form or medium of digital data communication, such as a communication network. Examples of communication networks can include a local area network, a wide area network, the Internet, an intranet, a Bluetooth network, an infrared network, or other networks.

[00120] The computing system can include clients and servers. A client and server can be generally remote from each other and can interact through a communication network. The relationship of client and server can arise by virtue of computer programs running on the respective computers and having a client-server relationship with each other.

[00121] Although a few variations have been described in detail above, other modifications can be possible. For example, the logic flows depicted in the accompanying figures and described herein do not require the particular order shown, or sequential order, to achieve desirable results. Other embodiments may be within the scope of the following claims.