Title:
UNOBTRUSIVE FUNDUS IMAGING
Document Type and Number:
WIPO Patent Application WO/2017/167587
Kind Code:
A1
Abstract:
A system may include: a planar surface (110, 510, 610) on which to render visual content (124, 524, 624) to a user positioned at a distance from the planar surface; a camera (112, 512, 612); one or more processors (106, 506, 606); and memory (108, 508, 608) operably coupled with the one or more processors. The memory may store instructions that cause the one or more processors to: identify a portion (132) of the user's fundus to be targeted by the camera; calculate a target position on the planar surface that, when focused on by the user, causes the identified portion of the user's fundus to be within a field of view (122) of the camera; render a graphical element (124) at the target position; and while the graphical element is rendered at the target position, cause the camera to capture an image of the targeted portion of the user's fundus.

Inventors:
DE BRUIJN FREDERIK JAN (NL)
LUCASSEN GERHARDUS WILHELMUS (NL)
PAULUSSEN IGOR WILHELMUS FRANCISCUS (NL)
Application Number:
PCT/EP2017/056331
Publication Date:
October 05, 2017
Filing Date:
March 17, 2017
Assignee:
KONINKLIJKE PHILIPS NV (NL)
International Classes:
A61B3/12; G06V10/141
Domestic Patent References:
WO2016028877A12016-02-25
Foreign References:
EP2394569A12011-12-14
US8934005B22015-01-13
Attorney, Agent or Firm:
KRUK, Arno et al. (NL)
Claims:
CLAIMS:

What is claimed is:

1. A system, comprising:

a planar surface (110, 510, 610) on which to render visual content (124, 524, 624) to a user positioned at a distance from the planar surface;

a camera (112, 512, 612) aimed towards the user;

one or more processors (106, 506, 606) operably coupled with the camera; and

memory (108, 508, 608) operably coupled with the one or more processors, the memory storing instructions that, when executed by the one or more processors, cause the one or more processors to:

identify a portion (132) of a fundus of the user to be targeted by the camera;

calculate a target position on the planar surface that, when focused on by the user, causes the targeted portion of the user's fundus to be within a field of view (122) of the camera;

render a graphical element (124) at the target position on the planar surface; and

while the graphical element is rendered at the target position, cause the camera to capture one or more images of the targeted portion of the user's fundus.

2. The system of claim 1, wherein the memory further stores instructions to cause the one or more processors to identify the portion of the user's fundus to be targeted by the camera based at least in part on a detected location of a pupil of the user within the field of view of the camera.

3. The system of claim 1 or 2, wherein the memory further stores instructions to cause the one or more processors to identify the portion of the user's fundus to be targeted by the camera based at least in part on a record of one or more previously-targeted portions of the user's fundus.

4. The system of any one of claims 1, 2 or 3, wherein the memory further stores instructions to cause the one or more processors to:

detect a lateral shift by the user relative to the planar surface;

calculate, based on the detected lateral shift, an updated target position on the planar surface that, when focused on by the user, causes the targeted portion of the user's fundus to be within the field of view of the camera; and

render the graphical element at the updated target position.

5. The system of any one of the preceding claims, further comprising a light source (116, 616) operably coupled with the one or more processors, wherein the memory further stores instructions to cause the one or more processors to operate the light source to provide coaxial illumination towards the targeted portion of the user's fundus.

6. The system of claim 5, further comprising a semi-transparent mirror (118, 618) angled relative to the light source and the camera to guide light emitted by the light source towards the targeted portion of the user's fundus.

7. The system of claim 5 or 6, wherein the memory further stores instructions to cause the one or more processors to:

operate the light source and the camera to capture two or more successive images of the targeted portion of the user's fundus that alternate between being coaxially illuminated and non-coaxially illuminated; and

generate a composite image of the targeted portion of the user's fundus based on the two or more successive images.

8. The system of claim 7, wherein the memory further stores instructions to cause the one or more processors to subtract one of the two or more successive images that is non-coaxially illuminated from another of the two or more successive images that is coaxially illuminated.

9. The system of any one of claims 5 to 8, wherein the memory further stores instructions to cause the one or more processors to:

operate the light source to project a calibration light pattern onto a surface of an eye of the user that includes the fundus;

detect a sharpness of the projected calibration light pattern; and

cause the camera to capture one or more images of the user's fundus while the detected sharpness of the projected calibration light pattern satisfies a sharpness threshold.

10. The system of any one of the preceding claims, wherein the target position of the user's fundus is a first target position, and wherein the memory further stores instructions to cause the one or more processors to:

identify a second portion of the user's fundus to be targeted by the camera;

calculate a second target position on the planar surface that, when focused on by the user, causes the second targeted portion of the user's fundus to be within the field of view of the camera;

render a graphical element at the second target position on the planar surface;

while the graphical element is rendered at the second target position, cause the camera to capture one or more images of the second target position of the user's fundus; and

stitch together the one or more images of the first target position of the user's fundus with the one or more images of the second target position of the user's fundus to generate one or more composite images of the user's fundus.

11. The system of any one of the preceding claims, wherein the memory further stores instructions to cause the one or more processors to:

cause the camera to capture one or more images of skin of the user simultaneously with capture of the one or more images of the targeted portion of the user's fundus;

determine a momentary phase in a cardiac cycle of the user based on the captured one or more images of the skin of the user; and

cause the camera to capture one or more additional images of the target position of the user's fundus at a moment selected based at least in part on the determined momentary phase in the cardiac cycle of the user.

12. The system of claim 11, wherein the memory further stores instructions to cause the one or more processors to:

cause the camera to capture multiple images of the target position of the user's fundus over the cardiac cycle of the user; and

generate a composite sequence of images that collectively depict an effect of blood flow during one single cardiac cycle of the user.

13. The system of any one of the preceding claims, wherein the memory further stores instructions to cause the one or more processors to:

detect a blood vessel; and

alter a position on the planar surface at which the graphical element is rendered to cause an optic disc (134) of the user to enter the field of view of the camera.

14. The system of any one of the preceding claims, further comprising a projector (670) operably coupled with the one or more processors, wherein the planar surface comprises a projection surface, and wherein the memory further stores instructions to cause the one or more processors to operate the projector to project a rendition of the graphical element onto the projection surface.

15. The system of any one of claims 1 to 13, wherein the planar surface comprises an electronic display.

16. The system of claim 15, wherein the electronic display is a touchscreen.

17. The system of claim 15, wherein the electronic display is a smart mirror display.

18. The system of any one of the preceding claims, wherein a focal plane of the camera is coplanar with the planar surface.

19. The system of any one of claims 1 to 13 or 15 to 17, wherein a focal plane of the camera is on an opposite side of the camera from the planar surface.

20. The system of claim 19, wherein the system comprises a smart phone, and the planar surface comprises a touchscreen (560).

21. The system of claim 20, wherein the camera comprises a camera (512) of the smart phone.

22. The system of any one of the preceding claims, wherein the graphical element comprises a clock or weather indicator.

23. The system of any one of the preceding claims, wherein the graphical element portrays a status of a personal care device or a personal health status of the user.

24. A method comprising:

adjusting (202), by one or more processors, a focus setting of a camera so that a focal plane of the camera coincides with a planar surface on which visual content is rendered for consumption by a user spaced from the planar surface;

identifying (204), by the one or more processors, a portion of the user's fundus to be targeted by the camera;

calculating (208), by the one or more processors, a target position on the planar surface that, when focused on by the user, causes the targeted portion of the user's fundus to be within a field of view of the camera;

rendering (210), by the one or more processors, a graphical element at the target position on the planar surface; and

while the graphical element is rendered at the target position, causing (212), by the one or more processors, the camera to capture one or more images of the targeted portion of the user's fundus.

Description:
Unobtrusive fundus imaging

FIELD OF THE INVENTION

The present disclosure is directed generally to health care. More particularly, various inventive methods and apparatus disclosed herein relate to unobtrusive and/or noninvasive fundus imaging.

BACKGROUND OF THE INVENTION

Various aspects of the human eye are known to reflect physiological changes to, or conditions of, other parts of the human body. For example, the light-sensitive, image-capturing tissue of the eye referred to as the "retina" is known to exhibit a variety of characteristics that make it a valuable source of diagnostic information for eye-related diseases, such as glaucoma, as well as non-eye-related diseases such as diabetes. A unique feature of the retina is that it offers a relatively obstruction-free view of various arteries and veins, without being inhibited by occluding and/or scattering layers such as the skin. This enables ophthalmologists to check for the presence of vascular anomalies such as aneurysms and even neovascularization. Additionally, the unobstructed view of the retinal blood vessels poses an advantage in assessment of the levels of various metabolic compounds carried by the vascular system by exploiting the differences in characteristic absorption of visible and invisible ultraviolet or infrared light.

Traditionally, an ophthalmologist examines a patient's eye using an ophthalmoscope, which provides a direct view of a fragment of the retina under coaxial illumination from a built-in light source. Ophthalmoscopes come in various forms and can be used for "direct" observation or for "indirect" observation by using a relay lens held separately near the eye. Direct ophthalmoscopy, e.g., with a classical ophthalmoscope or the newer "panoptic" ophthalmoscopes, is typically performed relatively close to the eye. In some instances, a portion of the ophthalmoscope may even rest softly on the orbital region of the patient's eye for support. By contrast, indirect ophthalmoscopy can be done from a distance but requires a handheld relay lens to be held close to the eye.

Ophthalmoscopes are typically used by trained professionals in medical settings, which means patients' eyes are seldom examined for retinal symptoms other than during the occasional eye exam for reading glasses or the Amsler test. Consequently, markers of various diseases that may be observable in patients' eyes often remain undetected. Moreover, current methods of retinal imaging (e.g., using an ophthalmoscope) are obtrusive and completely block the visual field of the examined eye. Thus, there is a need in the art for more frequent and less intrusive retinal monitoring.

SUMMARY OF THE INVENTION

The present disclosure is directed to inventive methods and apparatus for unobtrusive fundus imaging. For example, in various embodiments, a user may position themselves in front of a so-called "smart mirror," or another generally planar display (e.g., a television, computer monitor, a tablet computer, a smart phone, etc.). The user may be prompted to focus her eyes on the display, e.g., by rendering and/or projecting one or more graphical elements on the display. This causes the user's focal plane to be aligned with a plane defined by the planar surface. In some embodiments, an imaging device such as a camera may be focused so that its focal plane also coincides with the plane defined by the planar surface. In other embodiments, such as where the planar display is a tablet computer or smart phone with an integrated front-facing camera, the camera may be focused (e.g., using an external optical element) so that its focal plane is located behind the tablet or smart phone. In either case, the user may be prompted to move her gaze to one or more locations on the planar display, so that various portions of the user's fundus are exposed to the camera's field of view. The camera may capture one or more images of various portions of the user's fundus, and these images may be used collectively or individually for a variety of diagnostic purposes.

Generally, in one aspect, a system may include: a planar surface on which to render visual content to a user positioned at a distance from the planar surface; a camera aimed towards the user; and one or more processors operably coupled with the camera. The one or more processors may be configured to: identify a portion of the user's fundus to be targeted by the camera; calculate a target position on the planar surface that, when focused on by the user, causes the identified portion of the user's fundus to be within a field of view of the camera; render a graphical element at the target position on the planar surface; and while the graphical element is rendered at the target position, cause the camera to capture one or more images of the targeted portion of the user's fundus.

In various embodiments, the planar surface may be an electronic display. In various versions, the electronic display may be a touchscreen. In various versions, the electronic display may be a smart mirror display. In various embodiments, the system may include a projector operably coupled with the one or more processors, the planar surface may be a projection surface, and the one or more processors may cause the projector to project a rendition of the graphical element onto the projection surface.

In various embodiments, the memory may store instructions that cause the one or more processors to: detect a lateral shift by the user relative to the planar surface; calculate, based on the detected lateral shift, an updated target position on the planar surface that, when focused on by the user, causes the targeted portion of the user's fundus to be within a field of view of the camera; and render the graphical element at the updated target position.

In various embodiments, a focal plane of the camera may be coplanar with the planar surface. In various embodiments, a focal plane of the camera may be on an opposite side of the camera from the planar surface. In various embodiments, the system may include a smart phone, and the planar surface may be a touchscreen of the smart phone. In various versions, the camera may include a camera of the smart phone.

In various embodiments, the graphical element may be a clock or weather indicator, or may portray a status of a personal care device or a personal health status of the user. In various embodiments, the portion of the user's fundus to be targeted by the camera may be identified based at least in part on a record of one or more previously-targeted portions of the user's fundus. In various embodiments, the portion of the user's fundus to be targeted by the camera may be identified based at least in part on a detected location of a pupil of the user within the field of view of the camera.

In various versions, a light source may be operably coupled with the one or more processors, and the one or more processors may operate the light source to provide coaxial illumination towards the targeted portion of the user's fundus. In various versions, the system may include a semi-transparent mirror angled relative to the light source and camera to guide light emitted by the light source towards the targeted portion of the user's fundus. In various embodiments, the memory may store instructions that cause the one or more processors to: operate the light source and camera to capture two or more successive images of the targeted portion of the user's fundus that alternate between being coaxially illuminated and non-coaxially illuminated; and generate a composite image of the targeted portion of the user's fundus based on the two or more successive images. In various versions, the memory may store instructions that cause the one or more processors to subtract one of the two or more successive images that is non-coaxially illuminated from another of the two or more successive images that is coaxially illuminated.

In various embodiments, the memory may store instructions that cause the one or more processors to: operate the light source to project a calibration light pattern onto the user's eye; detect a sharpness of the projected calibration light pattern from the user's eye; and cause the camera to capture one or more images of the user's fundus while the detected sharpness of the projected calibration light pattern satisfies a sharpness threshold.

In various embodiments, the target position of the user's fundus may be a first target position, and the memory may store instructions that cause the one or more processors to: identify a second portion of the user's fundus to be targeted by the camera; calculate a second target position on the planar surface that, when focused on by the user, causes the second identified portion of the user's fundus to be within the field of view of the camera; render a graphical element at the second target position on the planar surface; while the graphical element is rendered at the second target position, cause the camera to capture one or more images of the second target position of the user's fundus; and stitch together the one or more images of the first target position of the user's fundus with the one or more images of the second target position of the user's fundus to generate one or more composite images of the user's fundus.

In various embodiments, the memory may store instructions that cause the one or more processors to: cause the camera to capture one or more images of the user's skin simultaneously with capture of the one or more images of the targeted portion of the user's fundus; determine a momentary phase in a cardiac cycle of the user based on the captured one or more images of the user's skin; and cause the camera to capture one or more additional images of the target position of the user's fundus at a moment selected based at least in part on the determined momentary phase in the user's cardiac cycle.

As used herein, the term "smart mirror" refers to any assembly that includes a mirrored surface on which one or more graphical elements may be rendered. For example, in some embodiments, a two-way mirror may be placed in front of a display device so that graphics rendered on the display device are visible through the two-way mirror. In other embodiments, a "bathroom" television may be equipped with a reflective touchscreen that can be operated by a user to control content displayed on the screen. Smart mirrors may be used to display a variety of content, such as weather information, emails, texts, movies, and so forth— any content that would typically be displayed on a computer or a smart phone may similarly be displayed on a smart mirror. It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein. It should also be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the disclosure.

Fig. 1 schematically illustrates an example fundus imaging system configured with selected aspects of the present disclosure, in accordance with various embodiments.

Fig. 2 depicts an example method for unobtrusive fundus imaging in accordance with various aspects of the present disclosure.

Fig. 3 depicts an example imaging processing technique that may be employed in accordance with various aspects of the present disclosure.

Fig. 4 schematically depicts an example of how a system configured with selected aspects of the present disclosure may handle lateral user movement.

Fig. 5 schematically illustrates another example fundus imaging system configured with selected aspects of the present disclosure, in accordance with various embodiments.

Fig. 6 schematically illustrates another example fundus imaging system configured with selected aspects of the present disclosure, in accordance with various embodiments.

DETAILED DESCRIPTION OF EMBODIMENTS

Various aspects of the human eye are known to reflect physiological changes to, or conditions of, other parts of the human body. Traditionally, an ophthalmologist examines a patient's eye using an ophthalmoscope, which provides a direct view of a fragment of the retina under coaxial illumination from a built-in light source. However, because such examination is typically only performed during the occasional eye exam for reading glasses, or during performance of the Amsler test, patients' eyes are infrequently examined for retinal symptoms. Consequently, markers of various diseases that may be observable in patients' eyes often remain undetected. Moreover, current methods of retinal imaging (e.g., using an ophthalmoscope) are obtrusive and completely block the visual field of the examined eye. Thus, Applicants have recognized and appreciated that it would be beneficial to facilitate more frequent and less intrusive retinal monitoring. In view of the foregoing, various embodiments and implementations of the present disclosure are directed to techniques and systems for unobtrusive fundus imaging.

Referring to Fig. 1, a system 100 configured with selected aspects of the present disclosure is depicted schematically in relation to an eye 102 of a user (not depicted). System 100 may include a variety of components coupled together via one or more wired or wireless data/electrical/communication pathways 104, such as one or more buses. These components may include, for instance, logic 106, memory 108, one or more speakers 109, a planar surface 110 (e.g., a display screen, television), one or more audio inputs 111, and one or more imaging devices such as a camera 112.

In some embodiments, logic 106 may include one or more processors configured to execute instructions stored in memory 108. In other embodiments, logic 106 may come in other forms, such as a field-programmable gate array ("FPGA") or an application-specific integrated circuit ("ASIC"). In various embodiments, logic 106 may be communicatively coupled with one or more remote computing devices (not depicted) via one or more networks 114. One or more networks 114 may include one or more local area networks, wide area networks (e.g., the Internet), one or more so-called "personal area networks" (e.g., Bluetooth), and so forth.

System 100 may also include other components which may or may not be operably coupled with logic 106. For example, in Fig. 1, system 100 includes a lens 115 positioned in front of camera 112, a light source 116, and a semi-transparent mirror 118 positioned adjacent the light source and in front of camera 112. Semi-transparent mirror 118 may be angled to reflect light 120 (which may be visible, infrared, ultraviolet, etc.) emitted by light source 116 as coaxial illumination along a field of view 122 of camera 112. While not depicted in Fig. 1, in some embodiments, light source 116 may be operably coupled with, and hence controllable by, logic 106.

In some embodiments, lens 115 may be omitted, and refraction of eye 102 may be directly projected onto an image sensor of camera 112. In other embodiments, lens 115 may be a microlens array as described in U.S. Patent No. 8,934,005 to De Bruijn et al. In yet other embodiments, camera 112 may be tilted inward to capture a larger part of the volume in front of the planar surface 110 (e.g., the space in a bathroom in front of the smart mirror). In some such embodiments, lens 115 may be corrective to ensure that a focal plane of camera 112 coincides with a plane 126 defined by the planar surface 110. Such an optical configuration to tilt the focus plane is sometimes referred to as the "Scheimpflug principle."

Planar surface 110 may take various forms that may be selected in order to cause a user to position themselves at a relatively predictable and/or fixed distance from planar surface 110. In some embodiments, planar surface 110 may take the form of a "smart mirror" that hangs, for instance, on a user's bathroom wall over the sink, and that is configured for rendition of graphical elements on a reflective surface facing the user. In that manner, the user may see, in addition to his or her own reflection, one or more graphical elements 124 (e.g., targets) on the mirror.

Additionally or alternatively, planar surface 110 may take the form of a display device (e.g., a computer monitor, flat-screen television, etc.) that lacks a reflective surface. For example, many office workers spend hours each day in front of a computer screen. These may present prime opportunities to obtain multiple images of users' eyes over any length of time. In yet other embodiments, planar surface 110 may be a touchscreen, e.g., of a tablet computer or smart phone. An example of such an embodiment is depicted in Fig. 5. In yet other embodiments, planar surface 110 may be a passive component such as a projection screen or simply a wall surface upon which one or more graphical elements may be projected. An example of such an embodiment is depicted in Fig. 6. In each of these embodiments, the user is positioned at some distance (e.g., more than several inches) from planar surface 110. This facilitates unobtrusive examination of the user's fundus, which can be done as a matter of routine, e.g., daily, weekly, etc. By contrast, traditional examination by a professional requires use of an ophthalmoscope, which either requires obtrusive contact with the user, or at the very least, requires the professional to hold a relay lens at a particular position to obtain an image of the user's fundus.

In some embodiments, planar surface 110 may define a plane 126 that may serve as a shared focal plane of eye 102 and camera 112. In Fig. 1, for instance, camera 112 is adjusted so that it has a focal point 128 that lies on the plane 126. Similarly, eye 102 has been adjusted by the user so that its field of view 130 is focused on the graphical element 124, which also lies on plane 126. Consequently, both eye 102 and camera 112 share a common focal plane at 126. In this manner, a lens (not specifically indicated in Fig. 1) of eye 102 is properly adjusted so that field of view 122 of camera 112 is properly focused on, and is able to capture clear images of, a targeted posterior portion 132 of eye 102. Point 134 in Fig. 1 may represent the optic disc.

In various embodiments, system 100 may be configured to obtain, in an unobtrusive manner, one or more images of one or more selected portions of an interior (e.g., posterior) of eye 102. These one or more images may be used to diagnose and/or monitor various diseases, ailments, or other conditions of the user that are detectable based on one or more observable attributes of eye 102. For example, and referring to Fig. 2 in conjunction with Fig. 1, in various embodiments, at block 202, logic 106 may be configured to adjust a focus setting of camera 112 so that a focal plane of camera 112 coincides with planar surface 110 (e.g., with plane 126).

Logic 106 may be further configured to identify, at block 204, a portion 132 of the user's fundus to be targeted by camera 112. For example, if it is desired to determine whether the user has diabetes, then portions of the user's eyes likely to exhibit diabetic retinopathy may be targeted. If it is desired to examine aspects of the user's blood circulation, then one or more retinal blood vessels may be targeted. The targeted retinal feature may be selected in various ways. In some embodiments, the user may select the retinal feature, e.g., in response to instructions from a doctor, by operating a computing device (or planar surface 110 itself if a touchscreen). In some embodiments, the user's doctor's office may have the ability to remotely instruct logic 106 to target a specific feature of the user's fundus. In some embodiments, the targeted retinal feature may be selected based on one or more attributes of that retinal feature detected during routine monitoring, or based on one or more attributes of other retinal features that may justify examination of the selected retinal feature.

Once the target portion of the user's eye to be imaged is identified at block 204, in some embodiments, a location of a retinal feature such as the user's pupil, e.g., within field of view 122 of camera 112, may be determined at block 206. At block 208, logic 106 may calculate a target position on planar surface 110 that, when focused on by the user, causes the targeted portion of the user's fundus to be within field of view 122 of camera 112. For example, logic 106 may calculate a position on planar surface 110 such that if the user gazes at that position, the targeted portion 132 of the user's fundus will be exposed to field of view 122 of camera 112. This calculation may be based on a variety of inputs, including but not limited to the position of the user's pupil within field of view 122 of camera 112. At block 210, logic 106 may cause a graphical element (e.g., 124) to be rendered at the target position on planar surface 110. Meanwhile, at block 212, logic 106 may cause camera 112 to capture one or more images of the targeted portion 132 of the user's fundus.
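To make the Fig. 2 flow concrete, the following Python sketch strings blocks 202-212 together. The camera, display, and detection helpers (set_focal_plane_to_display, grab_frame, render_graphical_element, detect_pupil, compute_target) are hypothetical stand-ins for whatever hardware and vision routines a given embodiment provides; they are not part of any actual API described here.

def capture_targeted_fundus_images(camera, display, detect_pupil, compute_target,
                                   fundus_target, num_images=3):
    """Sketch of the Fig. 2 flow: focus, locate the pupil, place the target, render, capture."""
    camera.set_focal_plane_to_display()                   # block 202: focus camera on the surface plane
    # fundus_target stands in for the portion identified at block 204.
    pupil_xy = detect_pupil(camera.grab_frame())          # block 206: pupil location in the camera frame
    target_xy = compute_target(fundus_target, pupil_xy)   # block 208: position on the planar surface
    display.render_graphical_element(target_xy)           # block 210: draw the fixation target there
    return [camera.grab_frame() for _ in range(num_images)]  # block 212: capture while the user fixates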

In some embodiments, a brief scanning procedure may be implemented to find a suitable retinal feature to serve as a starting point (or reference point). For example, a graphical element may be rendered on the planar surface such that the imaged area (i.e. the portion of the user's fundus captured within the camera's field of view) is on (or near) the optic disc. There may be a corresponding 'blind spot' that is approximately 15° temporally (i.e. "horizontally outward") and 1.5° downwards relative to the point of fixation. This means that an initial horizontal angle θ (see, e.g., Fig. 4) between the user's pupil and the camera may be θ = 15°. Similarly, an initial vertical angle may be φ = 1.5°.
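As a rough illustration, those approximate blind-spot angles can be converted into an on-surface offset for the fixation target. The sketch below assumes the camera lies roughly in the plane of the surface and the user is a known distance away; both simplifications, and the 60 cm distance, are assumptions made for this example rather than figures from the document.

import math

def initial_target_offset(user_distance_m, theta_deg=15.0, phi_deg=1.5):
    """Lateral offsets of the fixation target, relative to the camera, for the blind-spot angles."""
    dx = user_distance_m * math.tan(math.radians(theta_deg))   # horizontal ("temporal") offset
    dy = -user_distance_m * math.tan(math.radians(phi_deg))    # vertical offset (downwards)
    return dx, dy

if __name__ == "__main__":
    dx, dy = initial_target_offset(user_distance_m=0.6)        # assume the user is ~60 cm away
    print(f"render target ~{dx * 100:.1f} cm to the side and {abs(dy) * 100:.1f} cm below the camera")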

Traditionally, an ophthalmologist using an ophthalmoscope may find the optic disc by detecting a blood vessel in the field of view, and then following the vascular bifurcations in the opposite direction, to quickly trace his or her way back towards the optic disc. A similar principle may be applied automatically by some embodiments of the present disclosure. For example, upon detection of a blood vessel by camera 112, logic 106 may change the position on planar surface 110 at which graphical element 124 is rendered, in order to quickly get the optic disc 134 into field of view 122 of camera 112.

In various embodiments, graphical element 124 may take a variety of forms. In some embodiments, graphical element 124 may portray a target such as the "X" depicted in Fig. 1 that the user is overtly instructed to look at in order to obtain a proper reading of the user's eye. The user may be instructed to follow the target with her eyes using audio and/or visual output, such as an instruction rendered on planar surface 110 to "FOLLOW THE TARGET." In other embodiments, graphical element 124 may take other forms selected to covertly attract the user's gaze. For example, graphical element 124 may portray a clock (e.g., a drawing of a clock and/or an LCD readout), an animated character (e.g., a bug, a smiley face, etc.), a weather indicator and/or icon (e.g., cloudy, chance of rain, temperature, etc.), a status of a personal care device of the user (e.g., "your electric toothbrush has 10% battery power remaining," or an image of a battery with a corresponding portion filled in), and/or a personal health status of the user (e.g., the user's weight if she is currently or has recently stepped on a scale, the user's temperature, etc.). In embodiments where planar surface 110 is a touchscreen (which may be the case, for instance, where system 100 includes a tablet computer or smart phone, or where planar surface 110 is a smart mirror whose reflective surface is also a touchscreen), graphical element 124 may be portrayed as a user interface element such as a button or an actuatable element of a video game.

Referring back to Fig. 1, in various embodiments, logic 106 may cause graphical element 124 to be rendered at different locations, e.g., in a predetermined sequence, in order to expose different posterior portions of the user's eye to field of view 122 of camera 112. The predetermined sequence may be selected, for instance, so that the resulting sequence of digital images may be stitched together to generate a composite image and/or otherwise used to make various calculations for various diagnoses. In some embodiments, logic 106 may cause graphical element 124 to be rendered to hover around a single position for some predetermined amount of time. This may enable camera 112 to obtain multiple images that slightly overlap, which may facilitate correlation and/or stitching of those images into a larger composite image that is useful for various purposes.

As noted above, in some embodiments, logic 106 may calculate a position on planar surface 110 at which graphical element 124 should be rendered based at least in part on a detected location of various retinal features of eye 102, such as the pupil, within field of view 122 (i.e. within a camera frame) of camera 112. Once the retinal feature (also referred to herein as a "reference retinal feature") is located within field of view 122, logic 106 may calculate a position on planar surface 110 that, when focused on by the user, causes a desired portion of the user's fundus to be exposed to field of view 122 of camera 112. In this manner, system 100 may operate as a "closed loop" system. If the user's eye moves (e.g., due to lateral shift of the user), then the reference retinal feature will be detected at a new location within the visible fundus area captured by the camera and used to recalculate a new position on planar surface 110 at which to render graphical element 124. This may in turn lead to camera 112 capturing an image stream in which the reference retinal feature appears at a relatively stable position across frames. In addition to or instead of the pupil, in various embodiments, other retinal features may be used, such as the optic disc 134, vascular bifurcation, a specific artery or vein, and so forth. In various embodiments, the reference retinal feature may be selected to be relatively stable across frames, e.g., to facilitate time-resolved measurements such as a sequence of images illuminated at various wavelengths, and/or a sequence of images that depict a time-variant physiological process such as a user's pulse and/or related photoplethysmographic ("PPG") response.
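A minimal sketch of that closed-loop behaviour follows: each frame, the reference feature is re-detected and the rendered target is nudged so the feature drifts back to its desired location in the camera frame. The detect_feature helper, the pixel coordinate convention, and the proportional correction (with its gain) are illustrative assumptions rather than details taken from the document.

def closed_loop_step(frame, detect_feature, desired_xy, target_xy, gain=0.5):
    """Return an updated on-surface target (in display pixels) given the latest camera frame."""
    fx, fy = detect_feature(frame)                        # reference feature location in this frame
    ex, ey = desired_xy[0] - fx, desired_xy[1] - fy       # drift relative to the desired location
    # Proportional correction; the sign and scale are device- and geometry-specific.
    return target_xy[0] + gain * ex, target_xy[1] + gain * ey

if __name__ == "__main__":
    drifted = lambda frame: (330.0, 240.0)                # toy "detector": feature drifted 10 px right
    print(closed_loop_step(None, drifted, desired_xy=(320.0, 240.0), target_xy=(400.0, 300.0)))
    # -> (395.0, 300.0): the rendered target is nudged to counter the drift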

In some embodiments, logic 106 may render graphical element 124 at a sequence of locations, each of which is selected based at least in part on previous locations on planar surface 110 at which graphical element 124 was rendered, and/or based on images previously captured by camera 112. For example, suppose system 100 is configured to monitor for a particular condition by obtaining images of a particular portion of the user's fundus over time. Logic 106 may keep track, e.g., in memory 108, of the positions at which graphical element 124 has been rendered, and may select new positions for rendition of graphical element 124 to target different posterior portions of eye 102. Additionally or alternatively, logic 106 may examine images recently captured by camera 112, e.g., over the span of several days, a week, a month, etc., and may identify posterior portions of eye 102 that need additional imaging.
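That record-keeping might look like the toy sketch below, which simply favours the fundus region imaged least often so far. The region labels and the selection rule are illustrative assumptions, not details from the document.

def next_region(all_regions, imaged_counts):
    """Pick the fundus region imaged least often so far (ties broken by list order)."""
    return min(all_regions, key=lambda r: imaged_counts.get(r, 0))

if __name__ == "__main__":
    regions = ["optic_disc", "macula", "superior_arcade", "inferior_arcade"]
    counts = {"optic_disc": 3, "macula": 1}               # record kept, e.g., in memory 108
    print(next_region(regions, counts))                   # -> "superior_arcade" (not yet imaged)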

Because light source 116 will typically be at a distance from the user, light captured by camera 112 may have a relatively low intensity. Consequently, various sources of noise (e.g., ambient light) may deteriorate the quality of raw images captured by camera 112. Accordingly, in various embodiments, image processing may be applied, e.g., by logic 106 or by another computing component, to cause various features of the posterior portion of eye 102 to become clearer. For example, retinal arteries and veins may become more clearly visible after image processing.

Fig. 3 depicts an example method 300 of performing image processing on images captured by camera 112. At block 302, correction may be made for any static disturbances of camera 112. Static disturbances may cause a spatial pattern of spurious pixel-value offsets that may be the same for every captured image, regardless of which portion of eye 102 is captured by camera 112. Such correction may be based on a calculation of an average noise/glare image using, for example, one hundred consecutive images under active coaxial illumination of a non-reflective black surface. A resulting image may combine the measurement of the following two imaging disturbances: a pixel-value offset due to dark fixed-pattern noise of a complementary metal-oxide semiconductor ("CMOS") sensor employed as part of camera 112 (e.g., giving rise to a static pattern of colored vertical stripes); and a pixel-value offset due to a glare of the coaxial illumination system due to internal reflections.
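A minimal NumPy sketch of the block 302 correction follows: average many calibration frames captured while coaxially illuminating a non-reflective black surface, then subtract that static offset from subsequent captures. The frame sizes and pixel values here are synthetic examples.

import numpy as np

def estimate_static_offset(calibration_frames):
    """Average many frames (e.g., ~100) of a non-reflective black surface under coaxial light."""
    return np.mean(np.stack(calibration_frames).astype(np.float32), axis=0)

def correct_static(frame, static_offset):
    """Subtract the static offset pattern and clip back to the valid 8-bit range."""
    return np.clip(frame.astype(np.float32) - static_offset, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    calib = [rng.integers(0, 20, (4, 4), dtype=np.uint8) for _ in range(100)]
    raw = rng.integers(0, 255, (4, 4), dtype=np.uint8)
    print(correct_static(raw, estimate_static_offset(calib)))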

At block 304, a correction may be made for dynamic correlated-noise of the acquisition system. Such dynamic correlated noise may cause spurious correlated-noise signals that differ for every captured image, regardless of what retinal features are captured in the image. In various embodiments, the correction may be based at least in part on the correction of so-called clamp noise, a phenomenon common to analogue television giving rise to a similar image-wise disturbance. At block 306, a correction may be made for dynamic uncorrelated-noise.

At block 308, a correction may be made for glare caused by the image object. As noted above, features of the user's fundus may appear sharp when the user being tested focuses on the display plane (e.g., 126 in Fig. 1) that coincides with the camera focus plane. However, other features of the user, such as the user's face, may appear out of focus. Due to a strong defocus blur, anything in an image captured by camera 112 that is in the vicinity of the user's pupil may tend to bleed into the sharp image of the retina, potentially reducing image contrast. Accordingly, in various embodiments, logic 106 may operate light source 116 to provide (e.g., by way of semi-transparent mirror 118) alternating coaxial illumination towards the targeted portion 132 of the user's fundus. In particular, logic 106 may operate light source 116 and camera 112 to capture two or more successive images of targeted portion 132 of the user's fundus that alternate between being coaxially illuminated and non-coaxially illuminated. Then, logic 106 may generate a composite image of targeted portion 132 of the user's fundus based on the two or more successive images. For example, in some embodiments, logic 106 may subtract one of the two or more successive images that is non-coaxially illuminated from another of the two or more successive images that is coaxially illuminated.
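That alternating-illumination correction can be sketched as a simple frame subtraction, as below; signed arithmetic is used so the subtraction does not wrap around in 8-bit images. The synthetic pixel values are for illustration only.

import numpy as np

def coaxial_minus_ambient(coaxial_frame, ambient_frame):
    """Difference image; signed arithmetic avoids uint8 wrap-around before clipping."""
    diff = coaxial_frame.astype(np.int16) - ambient_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    ambient = rng.integers(0, 100, (4, 4), dtype=np.uint8)                      # non-coaxial frame
    coaxial = np.clip(ambient.astype(np.int16) + 40, 0, 255).astype(np.uint8)   # plus a fundus return
    print(coaxial_minus_ambient(coaxial, ambient))                              # roughly the added signal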

The resulting image may clearly depict the desired retinal features without the surrounding features that are not of interest. This clearly-depicted retinal feature may give rise to several benefits. In some embodiments, the clearly-depicted retinal feature may be a pupil that can be used for pupil location detection within a frame of camera 112. In some embodiments, the resulting image may also be relatively free of glare caused by facial structures in the vicinity of the pupil. This glare may be the cause of defocus blur, and so its removal may improve the contrast of the image.

In addition, when one or more features of the user's fundus appear in sharp focus, captured images taken over time may allow for comparison with prior image captures. For example, changes of the same feature over time may be followed (e.g., to detect the gradual onset of diabetes). Similar features (e.g., arteries, veins) may be considered members of a "class," and may be collected for a combined analysis. Features belonging to multiple classes may also be collected and used for various calculations. For example, in some embodiments, a determination may be made of the level of blood oxygenation of the user based on the specific absorption of HbO2 and Hb, respectively, using the statistical average of the vessels classified as "arteries" in relation to the statistical average of those classified as "veins."
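For illustration only, the sketch below shows one heavily simplified way such an oxygenation-related comparison could be set up: mean optical densities are computed for vessels classified as arteries and as veins at two wavelengths, and compared with a pulse-oximetry-style ratio. The wavelengths, the ratio form, and all numeric values are assumptions; the document does not specify this computation.

import numpy as np

def mean_optical_density(vessel_intensities, background_intensity):
    """OD = -log10(I_vessel / I_background), averaged over sampled vessel pixels."""
    return float(np.mean(-np.log10(np.asarray(vessel_intensities) / background_intensity)))

def ratio_of_ratios(od_red, od_infrared):
    """Pulse-oximetry-style ratio; lower values generally correspond to higher saturation."""
    return od_red / od_infrared

if __name__ == "__main__":
    # Synthetic pixel intensities at two illustrative wavelengths (e.g., ~660 nm and ~940 nm).
    artery_R = ratio_of_ratios(mean_optical_density([120, 118, 121], 200),
                               mean_optical_density([150, 149, 151], 200))
    vein_R = ratio_of_ratios(mean_optical_density([100, 98, 102], 200),
                             mean_optical_density([155, 154, 156], 200))
    print(f"artery ratio {artery_R:.2f} vs vein ratio {vein_R:.2f}")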

Referring back to Fig. 3, at optional block 310, multiple images captured by camera 112 may be stitched together to generate a new composite image covering a wider retinal area. In other embodiments, captured images may be analyzed individually as needs vary.

In another aspect, logic 106 may be configured to perform various "calibration" operations to account for one or more observable parameters of the user. For example, in some embodiments, logic 106 may cause camera 112 to capture one or more images of the user's skin simultaneously with capture of one or more images of targeted portion 132 of the user's fundus. Logic 106 may then determine a momentary phase in a cardiac cycle of the user based on the captured one or more images of the user's skin (e.g., similar to a PPG signal). Logic 106 may then cause camera 112 to capture one or more additional images of the targeted portion 132 of the user's fundus at a moment selected based at least in part on the determined momentary phase in the user's cardiac cycle. Or, logic 106 may account for the momentary phase in the user's cardiac cycle when, for instance, logic 106 compares one or more attributes (e.g., vessel diameter) of a retinal feature with one or more thresholds. In this manner, any light absorption detected in the user's retinal arteries or veins may be corrected and/or calibrated to avoid spurious readings.
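A sketch of that cardiac gating idea follows: a mean-skin-brightness (PPG-like) trace is reduced to a momentary phase, which could then be used to time the next fundus capture. The crude peak detector, the assumed frame rate, and the synthetic signal are all illustrative choices rather than details from the document.

import numpy as np

def momentary_phase(ppg, fs_hz):
    """Phase in [0, 1) since the most recent peak of a PPG-like trace (crude local-maximum detector)."""
    ppg = np.asarray(ppg, dtype=float)
    peaks = [i for i in range(1, len(ppg) - 1) if ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]]
    if len(peaks) < 2:
        return None
    period = (peaks[-1] - peaks[-2]) / fs_hz              # seconds per cardiac cycle
    elapsed = (len(ppg) - 1 - peaks[-1]) / fs_hz          # time since the last detected peak
    return (elapsed / period) % 1.0

if __name__ == "__main__":
    fs = 30.0                                             # assumed camera frame rate
    t = np.arange(0, 5, 1 / fs)
    skin_trace = 1.0 + 0.05 * np.sin(2 * np.pi * 1.2 * t) # synthetic ~72 bpm skin brightness signal
    print(f"current cardiac phase: {momentary_phase(skin_trace, fs):.2f} of a cycle")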

In a related aspect, the aforementioned captured momentary cardiac phase, recorded in association with a captured fundus image, may be used to generate a new composite image sequence. This new composite image sequence may cover a wider retinal area and collectively depict the effect of the blood flow during one single cardiac cycle. In various embodiments, there may be multiple captures of each portion of the fundus, each at a different cardiac phase. The generation of each phase-specific image in the sequence may result from interpolation between two or more of the phase-specific composing images captured at that approximate target phase.
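The interpolation step might be sketched as below, where the two captures whose recorded phases bracket the target phase are blended linearly. Linear weighting, and ignoring wrap-around at the ends of the cycle, are simplifying assumptions made for illustration.

import numpy as np

def interpolate_at_phase(captures, target_phase):
    """captures: list of (phase, image); blend the two captures whose phases bracket target_phase."""
    captures = sorted(captures, key=lambda c: c[0])
    phases = [c[0] for c in captures]
    hi = next((i for i, p in enumerate(phases) if p >= target_phase), len(phases) - 1)
    lo = max(hi - 1, 0)
    (p0, img0), (p1, img1) = captures[lo], captures[hi]
    w = 0.0 if p1 == p0 else min(1.0, max(0.0, (target_phase - p0) / (p1 - p0)))
    return (1 - w) * img0.astype(float) + w * img1.astype(float)

if __name__ == "__main__":
    a = (0.2, np.full((2, 2), 10.0))
    b = (0.6, np.full((2, 2), 30.0))
    print(interpolate_at_phase([a, b], 0.4))              # halfway between the two -> values of 20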

In another related aspect, logic 106 may be configured to determine whether the user has properly focused eye 102 on plane 126 before capturing images. This may be accomplished in various ways. In some embodiments, techniques described in U.S. Patent No. 8,934,005 to De Bruijn et al. may be employed. For example, logic 106 may operate light source 116 to project a calibration light pattern (e.g., near infrared, or "NIR") onto eye 102. Logic 106 may then detect a sharpness of the projected calibration light pattern from eye 102. Logic 106 may then cause camera 112 to capture one or more images of the user's fundus while the detected sharpness of the projected calibration light pattern satisfies a sharpness threshold. If the projected light pattern is not sufficiently sharp, on the other hand, then the user has not properly focused on target plane 126, and consequently, no images may be captured.

A user standing in front of a bathroom mirror or sitting in front of a computer screen is unlikely to remain absolutely stationary. Accordingly, in various embodiments, system 100 may be configured to adjust various parameters in response to a determination that a user has shifted position. In particular, logic 106 may be configured to detect a lateral shift by the user relative to planar surface 110. In response, logic 106 may calculate, based on the detected lateral shift, an updated target position on planar surface 110 that, when focused on by the user, causes targeted portion 132 of the user's fundus to be within field of view 122 of camera 112. Then, logic 106 may cause graphical element 124 to be rendered at the updated target position.
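A minimal sketch of the calibration-pattern sharpness gate described above (before the discussion of lateral movement) follows. The variance-of-Laplacian sharpness measure and the threshold value are assumptions chosen for illustration; the document does not specify how sharpness is quantified.

import numpy as np

def laplacian_sharpness(gray):
    """Variance of a 4-neighbour Laplacian; higher values indicate sharper structure."""
    g = gray.astype(np.float32)
    lap = -4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
    return float(lap.var())

def should_capture(pattern_patch, threshold=50.0):
    """Gate image capture on the sharpness of the projected calibration pattern."""
    return laplacian_sharpness(pattern_patch) >= threshold

if __name__ == "__main__":
    blurred = np.full((32, 32), 128, dtype=np.uint8)                    # featureless: not sharp
    striped = np.tile(np.array([[0, 255]], dtype=np.uint8), (32, 16))   # strong edges: sharp
    print(should_capture(blurred), should_capture(striped))             # False True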

One example of how this may be achieved is illustrated schematically in Fig. 4, which is an overhead view of a user's eye 402 focusing on a target position T of a target plane 426 defined by a planar surface (not depicted in Fig. 4). The point C represents a position of a camera, and the point R represents a targeted posterior portion of eye 402 that is targeted by camera C. The user's pupil is represented by the point P.

The distance between the camera C and the eye 402 is indicated by z_P. The lateral offset of eye 402 from camera C is indicated by x_P. The distance between camera C and the target plane 426 is indicated by z_T. The lateral offset of the target T from camera C is indicated by x_T. Suppose the user moves laterally such that eye 402 and pupil P shift (a change of x_P) from position P to position P'. This may cause the position P_CF of pupil P in a camera frame 452 to shift to P'_CF. The targeted portion R of the fundus will also shift, to R', when eye 402 shifts to the position indicated at 402'. To ensure that camera C observes the same targeted portion R' after lateral eye movement, the viewing target T may be shifted to T' in order to cause eye 402 to correspondingly rotate. Put another way, the triangle spanned in Fig. 4 by R, S, and P should not change shape, which also means that the angle θ enclosed by the lines PT and PC should remain constant. This may be achieved, for instance, by moving the target T to a new position T', so that θ' remains the same as θ. In some embodiments, an equation relating these quantities may be used to calculate the updated target offset x_T' (and hence, T').
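A plausible form of such an equation, reconstructed from the Fig. 4 quantities under the assumptions that the target plane lies between the camera and the user and that all offsets are measured laterally from the camera axis (offered as an illustrative sketch, not necessarily the exact expression intended), is:

$$
\theta = \arctan\!\left(\frac{x_P}{z_P}\right) - \arctan\!\left(\frac{x_P - x_T}{z_P - z_T}\right),
\qquad
x_T' = x_P' - (z_P - z_T)\,\tan\!\left(\arctan\!\left(\frac{x_P'}{z_P}\right) - \theta\right),
$$

where x_P' is the pupil offset after the lateral shift and x_T' is the updated target offset that keeps θ, and hence the triangle of Fig. 4, unchanged.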

In other embodiments, techniques other than pupil location may be employed to account for lateral user movement. In some embodiments, logic 106 may employ head tracking techniques to detect when the user has shifted. For example, the blurred appearance of the user's face within the field of view 122 of camera 112 may be analyzed in combination with a model of a human face, e.g., stored in memory 108. Or, in some embodiments, a second camera may be employed to capture additional images that can be used to detect head movement.

Fig. 5 depicts an alternative embodiment of a system 500 configured with components that, for the most part, are similar to those depicted in Fig. 1 (except numbered "5XX" instead of "1XX"). Accordingly, most of the components will not be described again, and many are omitted from Fig. 5 altogether for the sake of clarity. However, system 500 differs from system 100 in at least one key aspect. In Fig. 5, planar surface 510 takes the form of a tablet computing device or smart phone that includes a touchscreen 560, and the camera takes the form of a front-facing camera 512. In this example, touchscreen 560 defines a plane 526 that, like plane 126 in Fig. 1, is meant to be used as a focal plane for an eye 502 of a user. To that end, logic (not depicted in Fig. 5, see 106 in Fig. 1) of the tablet computer or smart phone may cause a graphical element 524 to be rendered on touchscreen 560 at a location selected to cause a targeted posterior portion 532 of eye 502 to be exposed to a field of view 522 of camera 512, much in the same way as was described previously.

Another difference between system 500 in Fig. 5 and system 100 in Fig. 1 is that camera 512 of Fig. 5 may be focused so that its focal plane 562 is artificially positioned behind the tablet computer/smart phone, e.g., on the opposite side of plane 526 from the user. This may be a built-in focusing capability of the camera, or this may be achieved by placement of a focus-correcting optical element in front of the camera. This may neutralize optics of eye 502 by moving the focal plane backwards, in turn facilitating clear imaging of posterior portions of the user's fundus when the user is relatively close to the camera 512 and focusing at a distance that coincides with focal plane 562, as would be the case with Fig. 5. While the front-facing camera 512 is depicted in Fig. 5, this is not meant to be limiting. In some instances, a rear-facing camera may be more powerful than a front-facing camera. In some such instances, various types of optics (e.g., mirrors, casting video streams to other devices, etc.) may be employed to facilitate implementation of disclosed techniques with a rear-facing camera.

Fig. 6 depicts an alternative embodiment of a system 600 configured with components that, for the most part, are similar to those depicted in Fig. 1 (except numbered "6XX" instead of "1XX"). Accordingly, most of the components will not be described again. However, system 600 differs from system 100 in at least one key aspect. In Fig. 6, planar surface 610 takes the form of a projection surface such as a projection screen or a blank wall. A projector 670 may be operably coupled with logic 606 so that logic 606 can perform operations similar to those described above, such as rendering graphical element 624 at various locations on the projection surface to cause eye 602 to look at those locations, exposing a targeted posterior portion 632 of a fundus of eye 602 to a field of view 622 of camera 612.

Images captured by the various cameras (e.g., 112, 512, 612) may be used by various medical personnel in various ways to diagnose and/or monitor various ailments and conditions. In some embodiments, logic (e.g., 106, 506, 606) may connect to a remote computing device over one or more networks (e.g., 114, 514, 614), so that captured images and/or data generated based on captured images may be accessed by medical professionals. In some embodiments, this transfer of data may take place only when certain criteria are met, such as upon image-wise coverage of sufficient retinal area, upon collection of a sufficient number of measurements (e.g., in order to attain a desired statistical significance), upon reaching a desired signal-to-noise ratio (e.g., in order to sufficiently suppress acquisition noise), and/or when a characteristic retinal feature changes beyond given thresholds.

In another embodiment, a system may include a second semi-transparent mirror, behind which a second camera or other image sensor may be placed. This second image sensor may operate as a point-wise optical detector to perform a momentary integral measurement of what is in front of the camera. For example, in some embodiments, the second image sensor may be configured to capture images at different focus distances, such as capturing a sharp image of the user's face and/or capturing a sharp image of the iris. In various embodiments, the appearance of the user's face and/or specific retinal features may be used as a means of personal identification, for instance with the aim of discriminating among multiple subjects using the same system.

In another embodiment of the disclosure, multiple cameras, each provided with coaxial illumination (e.g., using one or more light sources and semi-transparent mirrors), may be positioned around a periphery of a planar surface (e.g., 110, 510, 610). The multiple cameras may share the same focal plane. One of the cameras, e.g., at the bottom edge of the planar surface, may capture images of the upper half of the user's ocular fundus. Another camera positioned at the top edge of the planar surface may capture images of the bottom half of the user's fundus. Similarly, cameras flanking the planar surface on either side may capture images of respective sides of the user's ocular fundus. Such an arrangement may facilitate capturing of images of the user's ocular fundus both to the left and right of the user's fovea.

While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.

All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.

The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."

The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases.

Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.

As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of" or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be interpreted as indicating exclusive alternatives (i.e. "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.

As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.

It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited. In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. It should be understood that certain expressions and reference signs used in the claims pursuant to Rule 6.2(b) of the Patent Cooperation Treaty ("PCT") do not limit the scope.