Title:
SYSTEMS AND METHODS FOR MEASUREMENT OF RETINAL ACUITY
Document Type and Number:
WIPO Patent Application WO/2020/191292
Kind Code:
A1
Abstract:
Embodiments of the present disclosure provide systems and methods directed to measuring the visual acuity of a retina. In operation, a visual stimulus device may present a retina with a plurality of visual stimuli. A computing device may record the retinal visual stimulus responses in response to the visual stimuli presented to the retina. The recorded visual stimulus responses are used to train a machine-learning model. The accuracy level of the machine-learning model is determined, where the accuracy level indicates the visual acuity of the retina. Using the indication of visual acuity of the retina, corrective treatment for vision impairment, including but not limited to corrective eyewear, restorative chemical therapy, and/or corrective surgery, may be recommended.

Inventors:
VAN GELDER RUSSELL N (US)
BABINO DARWIN (US)
BENSTER TYLER (US)
LAPRELL LAURA (US)
Application Number:
PCT/US2020/023858
Publication Date:
September 24, 2020
Filing Date:
March 20, 2020
Assignee:
UNIV WASHINGTON (US)
VAN GELDER RUSSELL N (US)
BABINO DARWIN (US)
BENSTER TYLER (US)
LAPRELL LAURA (US)
International Classes:
A61B3/113; A61B3/028; G16B40/00
Domestic Patent References:
WO2018132483A1 (2018-07-19)
Foreign References:
US20170365101A1 (2017-12-21)
US20180310828A1 (2018-11-01)
US20170113046A1 (2017-04-27)
US20180355347A1 (2018-12-13)
US20160098528A1 (2016-04-07)
Attorney, Agent or Firm:
BARNARD, Jonathan et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method comprising:

presenting a plurality of visual stimuli to a retina;

for each visual stimulus in the plurality of visual stimuli presented to the retina, recording a visual stimulus response, such that a plurality of visual stimulus responses are recorded, wherein the visual stimulus response comprises an action potential spike;

training a machine-learning model using a training set of the visual stimulus responses; and

based on the training, determining an accuracy level of the machine-learning model using a testing set of the visual stimulus responses, wherein the accuracy level of the machine-learning model indicates a visual acuity of the retina.

2. The method of claim 1, wherein each visual stimulus in the plurality of visual stimuli is one of an alternating checkerboard stimulus, a static contrast sensitivity gradient stimulus, a drifting contrast sensitivity gradient stimulus, an optotype stimulus, or other discrete and classifiable sets of digital or analogue projected images.

3. The method of claim 1, wherein the visual stimulus response is a spike in an action potential of the retina based at least in part on a visual stimulus presented from the plurality of visual stimuli.

4. The method of claim 1, further comprising:

spike-sorting each visual stimulus response, wherein the spike-sorting is based on distinguishing action potential spikes from background electrical noise.

5. The method of claim 1, further comprising:

for each visual stimulus in the plurality of visual stimuli presented to the retina, recording a corresponding time stamp value.

6. The method of claim 5, wherein the training set comprises a first portion of the visual stimulus responses and corresponding time stamp values.

7. The method of claim 6, wherein the testing set comprises a second portion of the visual stimulus responses and corresponding time stamp values.

8. The method of claim 5, wherein the time stamp value associates a visual stimulus presented from the plurality of visual stimuli to its respective visual stimulus response, including a start instance and an end instance of the visual stimulus.

9. The method of claim 1, wherein each visual stimulus in the plurality of visual stimuli includes metadata.

10. The method of claim 9, wherein the metadata is one of spatial resolution information, stimulus class information, or condition information.

11. The method of claim 1, wherein the machine-learning model is a radial basis function (RBF) kernel support vector machine (SVM) classifier.

12. The method of claim 1, wherein generating the training set and the testing set is based at least in part on using Monte Carlo Cross-Validation (MCCV).

13. The method of claim 1, further comprising:

based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina, recommending at least a corrective treatment for a vision impairment, the corrective treatment including at least one of a corrective eyewear recommendation, a restorative chemical therapy recommendation, or a corrective surgical recommendation.

14. The method of claim 1, further comprising:

based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina, recommending a transformation of the plurality of visual stimuli that improves the visual acuity.

15. One or more computer storage media storing computer-useable instructions that, when executed by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising:

receiving a plurality of visual stimulus responses, wherein each visual stimulus response is associated with a visual stimulus of a plurality of visual stimuli presented to a retina;

training a machine-learning model using a training set of the visual stimulus responses; and

based on the training, and using a testing set of the visual stimulus responses, determining an accuracy level of the machine-learning model, wherein the accuracy level of the machine-learning model indicates a visual acuity of the retina.

16. The one or more computer storage media of claim 15, wherein each visual stimulus in the plurality of visual stimuli is one of an alternating checkerboard stimulus, a static contrast sensitivity gradient stimulus, a drifting contrast sensitivity gradient stimulus, an optotype stimulus, or other discrete and classifiable sets of digital or analogue projected images.

17. The one or more computer storage media of claim 15, wherein the visual stimulus response is an action potential spike.

18. The one or more computer storage media of claim 15, the operations further comprising:

spike-sorting each visual stimulus response, wherein the spike-sorting is based on distinguishing action potential spikes from background electrical noise.

19. The one or more computer storage media of claim 15, wherein the training set comprises a first portion of the visual stimulus responses, and wherein the testing set comprises a second portion of the visual stimulus responses.

20. The one or more computer storage media of claim 15, the operations further comprising:

based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina, recommending at least one corrective treatment for a vision impairment, the corrective treatment including at least one of a corrective eyewear recommendation, a restorative chemical therapy recommendation, or a corrective surgical recommendation.

21. A system comprising:

a visual stimulus device configured to:

present a plurality of visual stimuli to a retina; and

a computing device configured to:

for each visual stimulus in the plurality of visual stimuli presented to the retina, record a visual stimulus response, such that a plurality of visual stimulus responses are recorded, wherein the visual stimulus response comprises an action potential spike;

train a machine-learning model using a training set of the visual stimulus responses; and

based on the training, determine an accuracy level of the machine-learning model using a testing set of the visual stimulus responses, wherein the accuracy level of the machine-learning model indicates a visual acuity of the retina.

22. The system of claim 21, wherein each visual stimulus in the plurality of visual stimuli is one of an alternating checkerboard stimulus, a static contrast sensitivity gradient stimulus, a drifting contrast sensitivity gradient stimulus, an optotype stimulus, or other discrete and classifiable sets of digital or analogue projected images.

23. The system of claim 21, wherein the visual stimulus response is a spike in an action potential of the retina based at least in part on a visual stimulus presented from the plurality of visual stimuli.

24. The system of claim 21, wherein the machine-learning model is a radial basis function (RBF) kernel support vector machine (SVM) classifier, and wherein generating the training set and testing set is based at least in part on using Monte Carlo Cross-Validation (MCCV).

25. The system of claim 21, wherein the computing device is further configured to:

based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina, recommend at least a corrective treatment for a vision impairment, the corrective treatment including at least one of a corrective eyewear recommendation, a restorative chemical therapy recommendation, or a corrective surgical recommendation.

Description:
SYSTEMS AND METHODS FOR MEASUREMENT OF

RETINAL ACUITY

STATEMENT OF FEDERALLY SPONSORED RESEARCH AND DEVELOPMENT

[0001] This invention was made with government support under R24 EY023937 and P30 EY001730 awarded by the National Institutes of Health. The government has certain rights in the invention.

CROSS-REFERENCE TO RELATED APPLICATION

[0002] This application claims the benefit under 35 U.S.C. § 119 of the earlier filing date of U.S. Provisional Application Serial No. 62/821,044 filed March 20, 2019, the entire contents of which are hereby incorporated by reference in their entirety for any purpose.

TECHNICAL FIELD

[0003] Examples described herein generally relate to measuring visual (e.g., retinal) acuity. Examples of determining the visual acuity of a retina using visual stimuli and a machine-learning model are described.

BACKGROUND

[0004] Visual acuity (e.g., retinal acuity) describes the acuteness, sharpness, or clearness of vision, and is a common clinical measurement of overall visual function. To test for the overall function of the visual system in humans, visual acuity is often measured by resolution of optotypes (e.g., stylized letters, Landolt rings, etc.) on a standard eye chart. While such tests generally work for humans, behavioral tests may be used for other mammals to estimate visual acuity and overall visual system function.

[0005] Retinal ganglion cells (RGCs) are a type of photoreceptive projection neuron located in the inner surface of the retina of the eye that convey information from other photoreceptors (e.g., cones and rods) to the rest of the brain. When light (e.g., visual stimulation) hits the retina, it stimulates photoreceptors, creating an electrical signal that is conveyed through RGCs to the optic nerve and to the brain. Visual information transfer from RGCs to the brain relies at least in part on the action potential spiking of the neurons, as the response of the retina to the visual stimulation is encoded in the action potential spiking patterns of RGCs. Healthy RGCs generally result in proper vision function (e.g., proper visual acuity).

[0006] Impaired visual acuity (or vision impairment) is a serious and common health problem often caused by damage to the retina, and more specifically, to the RGCs in the ganglion cell layers of the retina. Such damage may be caused by, for example, refractive errors, cataracts, age-related macular degeneration, stroke or other neurological brain damage, prolonged eye stress, etc. Vision impairment may include, for example, loss of central vision, loss of peripheral vision, blurred vision, and even blindness. Impaired visual acuity may reduce overall quality of life and may limit the ability to function or live independently.

[0007] Modern technologies and medical advancements have made vision restoration, that is, the restoration of visual function by retinal repair, a possibility. Some examples of vision restoration include gene therapy, cell therapy, and artificial retinal stimulation. While these therapies may help treat various vision-related impairments, challenges remain in determining their overall effectiveness, especially while attempting to prevent further damage to vision function.

SUMMARY

[0008] Embodiments of the present invention relate to systems and methods for measuring retinal acuity (e.g., in vitro, in vivo). In operation, a visual stimulus device may present a plurality of visual stimuli to a retina. In some examples, the visual stimuli presented to the retina may include one of an alternating checkerboard stimulus, a static contrast sensitivity gradient stimulus, a drifting contrast sensitivity gradient stimulus, an optotype stimulus, or other discrete and classifiable sets of digital or analogue projected images.

[0009] A computing device may, for each visual stimulus in the plurality of visual stimuli presented to the retina, record a visual stimulus response, such that a plurality of visual stimulus responses are recorded, where the visual stimulus response comprises an action potential spike. In some examples, for each visual stimulus presented to the retina, the computing device may record a time stamp value. The computing device may spike-sort each visual stimulus response, where the spike-sorting is based at least in part on distinguishing action potential spikes from background electrical noise.

[0010] The computing device may train a machine-learning model using a training set of the visual stimulus responses. The machine-learning model may be a radial basis function (RBF) kernel support vector machine (SVM) classifier. In some examples, the training set comprises a first portion of the visual stimulus responses and corresponding time stamp values. Generating the training set may be based at least in part on using Monte Carlo Cross-Validation (MCCV).

[0011] The computing device may, based on the training, determine an accuracy level of the machine-learning model using a testing set of the visual stimulus responses. In some examples, the testing set comprises a second portion of the visual stimulus responses and corresponding time stamp values. Generating the testing set may be based at least in part on using Monte Carlo Cross-Validation (MCCV). In some examples, the accuracy level of the machine-learning model indicates a visual acuity of the retina.
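By way of non-limiting illustration, the train-and-test procedure described above may be sketched using the scikit-learn library. The feature vectors below (e.g., binned spike counts per electrode) and their stimulus-class labels are synthetic stand-ins for recorded visual stimulus responses, and the specific library calls are an implementation assumption rather than part of the disclosure:

```python
# Hypothetical sketch: classify which stimulus class evoked each response;
# held-out accuracy serves as the acuity-indicating accuracy level.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 200 trials, 32 features each (synthetic stand-in for spike-count features);
# labels identify which of 4 stimulus classes was presented.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 4, size=200)
X += y[:, None] * 0.75  # give the classes separable structure

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # proxy for retinal acuity
print(f"classification accuracy: {accuracy:.2f}")
```

A higher held-out accuracy on finer stimuli would indicate greater resolving power of the tested retina.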

[0012] In some examples, based on the indication of a visual acuity of the retina, the system may recommend at least a corrective treatment for a vision impairment, the corrective treatment including at least one of a corrective eyewear recommendation, a restorative chemical therapy recommendation, or a corrective surgical recommendation.

[0013] In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the drawings and by study of the following description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] Reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:

[0015] FIG. 1 is a schematic illustration of a system for measuring the visual acuity of a retina, arranged in accordance with examples described herein;

[0016] FIG. 2 illustrates a schematic illustration of using measured visual acuity results for vision correction, in accordance with examples described herein;

[0017] FIG. 3 illustrates a flowchart of a method for measuring visual acuity, in accordance with examples described herein; and

[0018] FIG. 4 is a flowchart of a method for measuring visual acuity, arranged in accordance with examples described herein.

DETAILED DESCRIPTION

[0019] The following description of certain embodiments is merely exemplary in nature and is in no way intended to limit the scope of the disclosure or its applications or uses. In the following detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and which are shown by way of illustration specific to embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the disclosure. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of embodiments of the disclosure. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the disclosure is defined only by the appended claims.

[0020] Visual acuity describes the acuteness, sharpness, or clearness of vision, and is a common clinical measurement of overall visual function. Impaired visual acuity is a serious and common health problem often caused by damage to the retina that may result in degraded signaling from the RGCs in the ganglion cell layer. While advancements in medical technology have made vision restoration after retinal damage a possibility, determining the overall effectiveness of various treatments remains a challenge. It would be useful to have a method for measuring visual acuity (e.g., in vitro, in vivo) without risking further vision impairment.

[0021] Various embodiments described herein are directed to systems and methods for improved measurement of retinal acuity. In some examples, retinal acuity may be measured in vitro. In other examples, retinal acuity may be measured in vivo. Examples described herein may enable machine-learning to associate trains of action potentials (e.g., action potential spikes) emanating from RGCs with specific stimuli. Further examples described herein provide that if a machine-learning model can accurately classify a particular stimulus from RGC spike trains, then the resolving power of the retina tested may have been sufficient for the same task. In particular, various embodiments record retinal visual stimulus responses in response to visual stimuli presented to a retina. The recorded visual stimulus responses are used to train a machine-learning model. The accuracy level of the machine-learning model is determined, where the accuracy level indicates the visual acuity of the retina. Using the indication of visual acuity of the retina, corrective treatment for vision impairment, including but not limited to corrective eyewear, restorative chemical therapy, and/or corrective surgery, may be recommended.

[0022] Currently available visual acuity testing systems may suffer from a number of drawbacks. With respect to humans, visual acuity is often measured clinically (e.g., use of standardized charts). For example, a patient may be asked to read high-contrast, standardized optotype letters subtending smaller angles of resolution until the patient can no longer correctly identify individual letters. While such acuity testing systems may appropriately assess visual acuity in adults, they lack universality and are age-restrictive. In other words, such visual acuity testing systems lack accuracy, reliability, and repeatability when used for infants and young children.

[0023] With respect to non-human mammals, many current visual acuity testing systems are measured behaviorally, and are inferential at best. For example, optokinetic reflex testing, where eye movement is elicited by the tracking of a moving field, is limited to specific stimuli, namely, a rotating drum. Visual acuity to non-moving stimuli cannot be estimated using this technique. Similarly, learned associative testing, such as forced-choice maze navigation, requires repeated training of non-human mammals in an associative learning task. Here, non-human mammals vary in their ability to learn this association, and in many paradigms, are untrainable, making this technique an unviable option.

[0024] Additionally, many current visual acuity testing methods for non-human mammals require intact motor function and normal central nervous system function, which can create significant limitations. As a consequence, mutations resulting in neurologic or muscular deficits along with retinal dysfunction cannot be studied. Further, studying the effects of agents that influence vision and alertness simultaneously may not be possible. Even further, studying short-term vision restoration (e.g., with small molecule photoswitches), in which the duration of action of the agent is shorter than the period required for associative learning, may be impossible.

[0025] Accordingly, embodiments described herein are generally directed towards measuring retinal acuity in vitro (and/or in vivo). In this regard, embodiments described herein enable the measurement of retinal acuity by presenting visual stimuli to a retina, and for each stimulus presented, recording a corresponding visual stimulus response that comprises an action potential spike. Based at least in part on the recorded visual stimulus responses, a machine-learning model may be trained using a training set of the recorded visual stimulus responses. The accuracy level of the machine-learning model may be determined using a testing set of the recorded visual stimulus responses, where the accuracy level of the machine-learning model indicates the visual acuity of the retina tested.

[0026] In embodiments, a visual stimulus device may present a plurality of visual stimuli to a retina (e.g., a retina in vitro, a retina in vivo). In some embodiments, the visual stimulus presented to the retina is a checkerboard stimulus. In other embodiments, the visual stimulus presented is a gradient stimulus. In further embodiments, the visual stimulus presented is an optotype stimulus. Each visual stimulus may further include metadata, such as spatial resolution information, stimulus class information, and/or condition (e.g., contrast) information.

[0027] Examples of computing devices described herein may record, for each visual stimulus presented to the retina, a corresponding visual stimulus response, where each corresponding visual stimulus response includes an action potential spike (e.g., nerve impulse; spike). The visual stimulus response recorded is a spike in an action potential of the retina based at least in part on the visual stimulus presented to the retina. In some cases, for each visual stimulus presented to the retina, and for each visual stimulus response recorded in response to its corresponding visual stimulus, the computing device may additionally record a corresponding time stamp value that associates the visual stimulus presented to the retina to its corresponding visual stimulus response. The time stamp value may indicate a start instance and a stop instance that the corresponding visual stimulus was presented to the retina.
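By way of non-limiting illustration, the association between a presented stimulus, its recorded response, and its start/stop time stamp values might be represented as a simple record. The field names below are illustrative assumptions, not terms from the disclosure:

```python
# Hypothetical record linking a stimulus to its response and time stamps.
from dataclasses import dataclass, field

@dataclass
class StimulusResponse:
    stimulus_class: str        # e.g., "checkerboard", "gradient", "optotype"
    spike_times: list          # recorded action-potential spike times (s)
    start_time: float          # start instance of the stimulus presentation
    end_time: float            # stop instance of the stimulus presentation
    metadata: dict = field(default_factory=dict)  # resolution, contrast, etc.

    def spikes_in_window(self):
        """Keep only spikes that occurred while the stimulus was shown."""
        return [t for t in self.spike_times
                if self.start_time <= t <= self.end_time]

r = StimulusResponse("optotype", [0.05, 0.31, 0.62, 1.40], 0.0, 1.0,
                     {"contrast": 0.9})
print(r.spikes_in_window())
```

Filtering spikes to the presentation window is one simple way the time stamp values could associate each response with its stimulus.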

[0028] To record each visual stimulus response, in some cases a multi-electrode array (MEA) (also known as a microelectrode array) may be used. As used herein, MEAs contain multiple microelectrodes (e.g., up to tens of thousands) through which neural signals (e.g., action potential spikes) are obtained and/or delivered. Here, MEAs may serve as a neural interface between neurons (e.g., RGCs) and electronic circuitry. As can be appreciated, other mechanisms to record action potential spikes, such as optical imaging of voltage-sensitive dye, calcium-sensitive dye, genetically encoded calcium indicators, or genetically encoded voltage indicators, may also be implemented, and use of MEAs is in no way limiting.

[0029] Using the visual stimulus responses, the computing device may train a machine-learning model to measure the visual acuity of the retina. In operation, to measure the visual acuity of the retina, the computing device may spike-sort each visual stimulus response. In some cases, the spike-sorting is based on distinguishing the recorded action potential spikes from additional background electrical noise also recorded. The computing device may further generate at least one training set that includes a first portion of the recorded visual stimulus responses and, in some cases, the corresponding time stamp values. In some cases, Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used to generate the training set(s). As can be appreciated, additional and/or alternative methods, such as k-fold and holdout sampling, may also be used to generate the training set(s).
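By way of non-limiting illustration, the noise-distinguishing step of spike sorting may be sketched as simple amplitude thresholding against an estimated noise level. Real spike sorting typically also clusters waveform shapes; the simulated trace below is a synthetic stand-in for an MEA recording:

```python
# Hypothetical sketch: detect action-potential spikes in a noisy trace by
# thresholding at a multiple of the estimated background noise level.
import numpy as np

rng = np.random.default_rng(1)

# Simulated extracellular trace: Gaussian background noise plus three
# large negative deflections standing in for action potentials.
trace = rng.normal(0.0, 1.0, size=1000)
for i in (200, 500, 800):
    trace[i] -= 8.0

# Estimate the noise standard deviation with the median absolute
# deviation, which is robust to the spikes themselves.
noise_sd = np.median(np.abs(trace)) / 0.6745
threshold = -4.0 * noise_sd
spike_indices = np.flatnonzero(trace < threshold)
print(spike_indices)
```

Samples crossing the threshold are treated as candidate spikes; everything else is classified as background electrical noise.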

[0030] Using the training set, the computing device may train the machine-learning model. In some cases, the machine-learning model may be a radial basis function (RBF) kernel support vector machine (SVM) classifier. In other cases, the machine-learning model may be a random forest (e.g., random decision forest) classifier. As can be appreciated, other machine-learning models used for classification and regression may also be implemented, such as, for example, additional unsupervised machine-learning models, supervised learning, clustering, dimensionality reduction, structured prediction, artificial neural networks, convolutional neural networks, reinforcement learning, and the like.

[0031] The computing device may determine the accuracy level of the machine-learning model using a testing set. In operation, the computing device may generate a testing set(s) that includes a second portion of the recorded visual stimulus responses and, in some cases, the corresponding time stamp values. Similar to generating the training set(s), in some cases, Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used to generate the testing set(s). As can be appreciated, additional and/or alternative methods, such as k-fold and holdout sampling, may also be used to generate the testing set(s). Here, determining the classification accuracy level of the machine-learning model for the testing set indicates a visual acuity for the retina being tested (e.g., the retina being presented with the visual stimuli).
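By way of non-limiting illustration, Monte Carlo Cross-Validation, i.e., repeated random sub-sampling into training and testing sets, may be sketched with scikit-learn's ShuffleSplit. The data below are synthetic stand-ins for spike-response feature vectors and stimulus labels, and the library choice is an assumption for illustration:

```python
# Hypothetical sketch of MCCV: average held-out accuracy over many
# random train/test partitions of the recorded responses.
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))      # stand-in spike-response features
y = rng.integers(0, 2, size=200)    # stand-in stimulus-class labels
X += y[:, None] * 1.5               # make the two classes separable

# 20 random partitions, each holding out 25% of trials for testing.
mccv = ShuffleSplit(n_splits=20, test_size=0.25, random_state=0)
scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=mccv)

# The mean held-out accuracy is the acuity-indicating accuracy level.
print(f"mean accuracy over {len(scores)} splits: {scores.mean():.2f}")
```

Averaging over many random splits reduces the sensitivity of the accuracy estimate to any single train/test partition.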

[0032] In some embodiments, based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina being tested, the computing device may recommend at least one corrective treatment for a vision impairment. For example, in some cases the recommendation may include a recommendation for corrective eyewear. In other cases the recommendation may include a recommendation for restorative chemical therapy. In further cases, the recommendation may include a recommendation for corrective surgery. As can be appreciated, other corrective vision treatment recommendations not explicitly described herein are contemplated as possible recommendations provided by the computing device in response to determining the visual acuity of the retina being tested.

[0033] In some embodiments, the accuracy level of the machine-learning model may be used to guide the transformation of visual stimuli to optimize visual acuity. For example, a particular vision restoration technique may have low visual acuity in bright images. The machine-learning model can be used to compare visual acuity between raw images versus images preprocessed such that only the edges in the image (i.e., the outline of an object) are visible. In other cases, the model can be used to directly transform visual stimuli to optimize for discriminability, such as by backpropagating images through an artificial neural network to create maximally discernible images. As can be appreciated, other methods for using a machine-learning model that indicates visual acuity not explicitly described herein are contemplated as possible techniques for transforming visual stimuli for improved visual acuity.
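By way of non-limiting illustration, the edge-only preprocessing mentioned above may be sketched as a simple gradient-magnitude filter. A practical system might instead use a Sobel filter or a learned transformation; the square image below is a synthetic stand-in for a visual stimulus:

```python
# Hypothetical sketch: reduce an image to its edges so only object
# outlines remain visible to the retina.
import numpy as np

def edges(img, threshold=0.25):
    """Binary edge map from the finite-difference gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return (np.hypot(gx, gy) > threshold).astype(int)

# A bright square on a dark background: only its outline should survive.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
edge_map = edges(img)
print(edge_map)
```

Comparing classification accuracy on raw versus edge-only stimuli, as described above, would indicate whether the transformation improves discriminability.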

[0034] FIG. 1 is a schematic illustration of a system 100 (e.g., visual acuity measurement system) for measuring the visual acuity of a retina, in accordance with examples described herein. It should be understood that this and other arrangements and elements (e.g., machines, interfaces, function, orders, and groupings of functions, etc.) can be used in addition to or instead of those shown, and some elements may be omitted altogether. Further, many of the elements described herein are functional entities that may be implemented as discrete or distributed components or in conjunction with other components, and in any suitable combination and location. Various functions described herein as being performed by one or more components may be carried out by firmware, hardware, and/or software. For instance, and as described herein, various functions may be carried out by a processor executing instructions stored in memory.

[0035] Among other components not shown, system 100 of FIG. 1 includes data store 104, computing device 106, and visual stimulus device 108. Computing device 106 includes processor 114 and memory 116. Memory 116 includes executable instructions for measuring visual acuity 118. It should be understood that system 100 shown in FIG. 1 is an example of one suitable architecture for implementing certain aspects of the present disclosure. Additional, fewer, and/or different components may be used in other examples. It should be noted that implementations of the present disclosure are equally applicable to other types of devices such as mobile computing devices and devices accepting gesture, touch, and/or voice input. Any and all such variations, and any combination thereof, are contemplated to be within the scope of implementations of the present disclosure. Further, although illustrated as separate components of computing device 106, any number of components can be used to perform the functionality described herein. Although illustrated as being a part of computing device 106, the components can be distributed via any number of devices. For example, processor 114 can be provided via one device, server, or cluster of servers, while memory 116 may be provided via another device, server, or cluster of servers.

[0036] As shown in FIG. 1, computing device 106 and visual stimulus device 108 may communicate with each other via network 102, which may include, without limitation, one or more local area networks (LANs) and/or wide area networks (WANs). Such networking environments are commonplace in offices, enterprise-wide computer networks, laboratories, homes, intranets, and the Internet. Accordingly, network 102 is not further described herein. It should be understood that any number of computing devices and/or visual stimulus devices may be employed within system 100 within the scope of implementations of the present disclosure. Each may comprise a single device or multiple devices cooperating in a distributed environment. For instance, computing device 106 could be provided by multiple server devices collectively providing the functionality of computing device 106 as described herein. Additionally, other components not shown may also be included within the network environment.

[0037] Computing device 106 and visual stimulus device 108 have access (via network 102) to at least one data store or repository, such as data store 104, which includes data and metadata specific to a plurality of various stimuli, as well as the data and metadata associated with recorded visual stimulus responses. In implementations of the present disclosure, the data store is configured to be searchable for one or more of the visual stimuli to present to a retina, and/or one or more of the recorded visual stimulus responses. It should be understood that the information stored in data store 104 may include any information relevant to presenting visual stimuli, recording visual stimulus responses, and/or otherwise measuring the visual acuity of a retina. For example, data store 104 may include time stamp information corresponding to recorded visual stimulus responses as well as various types of stimuli including but not limited to checkerboard stimuli, gradient stimuli, and/or optotype stimuli. Data store 104 may additionally include metadata associated with visual stimuli presented to a retina, including but not limited to spatial resolution information, stimulus class information, and/or condition (e.g., contrast) information.

[0038] Such information stored in data store 104 may be accessible to any component of system 100. The content and volume of such information are not intended to limit the scope of aspects of the present technology in any way. Further, data store 104 may be a single, independent component (as shown) or a plurality of storage devices, for instance, a database cluster, portions of which may reside in association with computing device 106, visual stimulus device 108, another external computing device (not shown), and/or any combination thereof. Additionally, data store 104 may include a plurality of unrelated data repositories or sources within the scope of embodiments of the present technology. Data store 104 may be updated at any time, including an increase and/or decrease in the amount and/or types of visual stimuli available to present to a retina (as well as all accompanying metadata), as well as an increase and/or decrease in the amount of recorded visual stimulus responses stored (as well as all accompanying metadata).

[0039] Examples of visual stimulus device 108 described herein may generally implement presentation of visual stimuli to a retina, such as retina 112, via optical path 110. Visual stimulus device 108 may include any optical device capable of presenting images and/or video to a retina, at various speeds, resolutions, contrasts, brightness levels, and the like, using various techniques, such as transparent lenses, lasers, etc. For example, visual stimulus device 108 may present a retina with various visual stimuli, including but not limited to a checkerboard stimulus, a gradient stimulus, and/or an optotype stimulus, each at various speeds, contrasts, resolutions, and the like. As should be appreciated, visual stimulus device 108 may be implemented using any number of optical devices, including but not limited to a video projector, slide projector, overhead projector, handheld projector, or any other handheld, mobile, tablet, or wireless device. Generally, visual stimulus device 108 may include software (e.g., one or more computer readable media encoded with executable instructions) and a processor that may execute the software to provide presentation of visual stimulus functionality.

[0040] Examples described herein may include computing devices, such as computing device 106 of FIG. 1. Computing device 106 may in some examples be integrated with one or more visual stimulus device(s) described herein. In some examples, computing device 106 may be implemented using one or more computers, servers, smart phones, smart devices, or tablets. Computing device 106 may measure the visual acuity of a retina being tested. As described herein, computing device 106 includes processor 114 and memory 116. Memory 116 includes executable instructions for measuring visual acuity 118, which may be used to measure the visual acuity of a retina being tested (e.g., presented with visual stimuli), such as retina 112. In some embodiments, computing device 106 may be physically coupled to visual stimulus device 108. In other embodiments, computing device 106 may not be physically coupled to visual stimulus device 108 but collocated with the visual stimulus device. In even further embodiments, computing device 106 may neither be physically coupled to visual stimulus device 108 nor collocated with the visual stimulus device.

[0041] Computing devices, such as computing device 106 described herein, may include one or more processors, such as processor 114. Any kind and/or number of processors may be present, including one or more central processing unit(s) (CPUs), graphics processing units (GPUs), other computer processors, mobile processors, digital signal processors (DSPs), microprocessors, computer chips, and/or processing units configured to execute machine-language instructions and process data, such as executable instructions for measuring visual acuity 118.

[0042] Computing devices, such as computing device 106, described herein may further include memory 116. Any type or kind of memory may be present (e.g., read only memory (ROM), random access memory (RAM), solid state drive (SSD), and secure digital card (SD card)). While a single box is depicted as memory 116, any number of memory devices may be present. The memory 116 may be in communication with (e.g., electrically connected to) processor 114.

[0043] Memory 116 may store executable instructions for execution by processor 114, such as executable instructions for measuring visual acuity 118. Processor 114, being communicatively coupled to visual stimulus device 108 and via the execution of the executable instructions for measuring visual acuity, may determine the accuracy level of the machine-learning model, which indicates the visual acuity of the retina being tested.

[0044] In operation, to measure the visual acuity of the retina being tested, processor 114 of computing device 106 may record, for each visual stimulus presented to the retina, such as retina 112, a corresponding visual stimulus response. The visual stimulus response recorded by processor 114 is a spike in an action potential of the retina based at least in part on the visual stimulus presented to the retina. In some cases, for each visual stimulus presented to a retina, such as retina 112, and for each visual stimulus response recorded by processor 114 of computing device 106 in response to its corresponding visual stimulus, processor 114 may additionally record a corresponding time stamp value that associates the visual stimulus presented to the retina with its corresponding visual stimulus response. The time stamp value may indicate a start instance and a stop instance of the presentation of the corresponding visual stimulus to the retina.
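As an illustrative sketch only (not part of the disclosed system), the association between a presented stimulus, its start/stop time stamps, and the recorded action-potential spikes might be represented as follows; all names here are hypothetical:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class StimulusResponse:
    """One presented stimulus paired with its recorded response."""
    stimulus_id: int            # index into the set of presented stimuli
    start_s: float              # time stamp: presentation start (seconds)
    stop_s: float               # time stamp: presentation stop (seconds)
    spike_times_s: List[float]  # action-potential spike times in the window

def spikes_in_window(all_spikes: List[float], start_s: float, stop_s: float) -> List[float]:
    """Associate recorded spikes with a stimulus via its time stamps."""
    return [t for t in all_spikes if start_s <= t < stop_s]

# spikes at 1.2 s and 1.8 s fall within the [1.0, 2.0) presentation window
resp = StimulusResponse(0, 1.0, 2.0,
                        spikes_in_window([0.5, 1.2, 1.8, 2.5], 1.0, 2.0))
```

This keeps each response tied to the start/stop instances described above, so responses can later be assembled into training and testing sets.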

[0045] As described herein, to record each visual stimulus response in response to the presentation of a visual stimulus by visual stimulus device 108, a multi-electrode array (MEA) may be used in some cases. As can be appreciated, in other cases, additional and/or alternative mechanisms to record action potential spikes in response to presentation of visual stimuli may also be implemented, and use of MEAs is in no way limiting.

[0046] Using the recorded visual stimulus responses, processor 114 of computing device 106 may train a machine-learning model (not shown) to measure the visual acuity of a retina being tested, such as retina 112. To measure the visual acuity of the retina, such as retina 112, processor 114 of computing device 106 may spike-sort each recorded visual stimulus response. In some instances, the spike-sorting is based on distinguishing the recorded action potential spikes for each visual stimulus presented from the background electrical noise additionally recorded.

[0047] Processor 114 of computing device 106 may generate at least one training set that includes a first portion of the recorded visual stimulus responses and, in some cases, corresponding time stamp values. In some cases, Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used to generate the training set(s). As described, additional and/or alternative methods, such as k-fold cross-validation and holdout sampling, may also be implemented.
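Monte Carlo Cross-Validation amounts to repeatedly drawing a random train/test partition of the recorded responses. A minimal sketch of such a split generator (hypothetical helper names; sample counts taken from the Implemented Examples):

```python
import numpy as np

def mccv_splits(n_samples, n_train, n_repeats, seed=0):
    """Monte Carlo cross-validation: repeated random train/test partitions.

    Each repeat randomly permutes the sample indices and takes the first
    n_train as the training set, the remainder as the withheld testing set.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_repeats):
        perm = rng.permutation(n_samples)
        yield perm[:n_train], perm[n_train:]

# e.g., 50 samples split into 30 training / 20 withheld, repeated 30 times
splits = list(mccv_splits(50, 30, 30))
```

Swapping this generator for a k-fold or holdout splitter would leave the rest of the pipeline unchanged.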

[0048] Using the training set, processor 114 of computing device 106 may train the machine-learning model (not shown). In some cases, the machine-learning model may be a radial basis function (RBF) kernel support vector machine (SVM) classifier. In other cases, the machine-learning model may be a random forest (e.g., random decision forest) classifier. As described, other machine-learning models used for classification and regression may also be implemented, such as other supervised learning, unsupervised learning, clustering, dimensionality reduction, structured prediction, artificial neural network, and reinforcement learning models.

[0049] Processor 114 of computing device 106 may determine the accuracy level of the machine-learning model (not shown) using a testing set. Processor 114 of computing device 106 may generate a testing set(s) that includes a second portion of the recorded visual stimulus responses and, in some cases, the corresponding time stamp values. As described, and similar to generating the training set(s), in some cases Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used to generate the testing set(s). As can be appreciated, additional and/or alternative methods, such as k-fold cross-validation and holdout sampling, may also be implemented. Here, determining the accuracy level of the machine-learning model indicates a visual acuity for the retina being tested, such as retina 112 (e.g., the retina being presented with the visual stimuli).

[0050] In some embodiments, based on determining the accuracy level of the machine-learning model (not shown) that indicates the visual acuity of the retina being tested, such as retina 112, processor 114 of computing device 106 may recommend at least one corrective treatment for a vision impairment (not shown in FIG. 1; refer to FIG. 2). For example, in some cases the recommendation may include a recommendation for corrective eyewear. In other cases, the recommendation may include a recommendation for restorative chemical therapy. In further cases, the recommendation may include a recommendation for corrective surgery. In even further cases, the recommendation may indicate that no corrective treatment is required. As can be appreciated, other corrective vision treatment recommendations not explicitly described herein are contemplated as possible recommendations provided by the computing device in response to determining the visual acuity of the retina being tested.
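The train-then-score loop can be illustrated end to end with a small sketch, assuming scikit-learn is available. The data here are synthetic stand-ins for featurized responses (the real features are binned spike counts, per the Implemented Examples); the accuracy averaged over repeated random sub-samples is the "accuracy level" read out as acuity:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# 50 synthetic feature vectors (stand-ins for spike-count features), 2 classes
X = rng.normal(size=(50, 40))
y = np.tile([0, 1], 25)
X[y == 1] += 2.0                        # make the classes separable

accs = []
for _ in range(30):                     # repeated random sub-sampling (MCCV)
    idx = rng.permutation(50)
    train, test = idx[:30], idx[30:]    # 30 training / 20 withheld examples
    clf = SVC(kernel="rbf", gamma="scale").fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))  # accuracy on withheld data

mean_accuracy = float(np.mean(accs))    # high accuracy -> good acuity readout
```

Because the synthetic classes are well separated, the mean withheld accuracy is high; responses from a damaged retina would yield less separable features and a lower accuracy level.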

[0051] Turning now to FIG. 2, FIG. 2 illustrates a schematic illustration of using measured visual acuity results for vision correction, in accordance with examples described herein. FIG. 2 includes vision impairment (e.g., retinal damage) block 202, visual acuity measurement block 204, recommendation block 206, and recommendation type blocks 208a-208c.

[0052] In examples described herein, vision impairment (e.g., retinal damage) block 202 may include various types and various degrees of vision loss. For example, vision impairment block 202 may include loss of central vision, loss of peripheral vision, blurred vision, generalized haze, extreme light sensitivity, night blindness, etc.

[0053] To measure the visual acuity of a retina, such as a damaged retina from vision impairment block 202, a visual acuity measurement system may be used, such as the visual acuity measurement systems and methods described herein (visual acuity measurement block 204). Here, retinal acuity may be measured by presenting visual stimuli to the retina and, for each stimulus presented, recording a corresponding visual stimulus response that comprises an action potential spike. Based at least in part on the recorded visual stimulus responses, a machine-learning model may be trained using a training set of the recorded visual stimulus responses. The accuracy level of the machine-learning model may be determined using a testing set of the recorded visual stimulus responses, where the accuracy level of the machine-learning model indicates the visual acuity of the retina tested.

[0054] Based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina being tested, and as can be seen at recommendation block 206, the visual acuity measurement system recommends at least one corrective treatment for a vision impairment. Examples of corrective treatment recommendations may include a restorative chemical therapy recommendation as seen at block 208a, a corrective eyewear recommendation as seen at block 208b, and/or a corrective surgical recommendation as seen at block 208c. As should be appreciated, other corrective recommendations may be made based on determining the retinal acuity of the retina being tested. In some cases, no treatment may be recommended; in other cases, one treatment may be recommended; in further cases, more than one treatment may be recommended. A user (e.g., a medical service provider) may take action based on the recommendations, such as by administering a treatment based on the recommendation.

[0055] For example, systems described herein may be used to evaluate the effect of a drug or another substance on a retina. In some cases, the drug may cause retinal impairment and/or damage, where the level of impairment and/or damage is to be evaluated. In other cases, the drug may reconstitute a previously damaged and/or impaired retina, where the level of reconstitution is to be evaluated. The retina's visual acuity may be assessed as described herein in the presence of the drug or other substance (e.g., by applying the drug or other substance to the retina). In some examples, the retina's visual acuity may additionally be assessed as described herein in the absence of the drug or other substance. If the visual acuity of the retina is unacceptable in the presence of the drug or other substance, the system may recommend against using the drug or other substance. If the drug or other substance provides acceptable (or improved) visual acuity (e.g., such as in the case of retinal reconstitution), the system may recommend using the drug or other substance. A user or medical service provider may administer the drug or other substance to patients responsive to the recommendation.

[0056] In some examples, a corrective eyewear recommendation may be made. For example, the retina's visual acuity may be determined to be acceptable for visualizing edges, but poor for other patterns. Accordingly, corrective eyewear may be recommended that accentuates edges in the visual field to assist detection by the retina. Patients having retinas similar to those tested may accordingly wear the corrective eyewear responsive to the recommendation.

[0057] FIG. 3 is a flowchart of a method arranged in accordance with examples described herein. The method 300 may be implemented, for example, using system 100 of FIG. 1.

[0058] The method 300 includes presenting a plurality of visual stimuli to a retina in block 302; for each visual stimulus in the plurality of visual stimuli presented to the retina, recording a visual stimulus response, such that a plurality of visual stimulus responses are recorded, wherein the visual stimulus response comprises an action potential spike, in block 304; training a machine-learning model using a training set of the visual stimulus responses in block 306; and, based on the training, determining an accuracy level of the machine-learning model using a testing set of the visual stimulus responses, wherein the accuracy level of the machine-learning model indicates a visual acuity of the retina, in block 308.

[0059] Block 302 recites presenting a plurality of visual stimuli to a retina. In one embodiment, the presented visual stimulus may be a checkerboard stimulus. In a further embodiment, the presented visual stimulus may be a gradient stimulus. In an even further embodiment, the presented visual stimulus may be an optotype stimulus. Each stimulus presented to a retina may further vary in resolution, speed, contrast, orientation, brightness, and the like.

[0060] Block 304 recites, for each visual stimulus in the plurality of visual stimuli presented to the retina, recording a visual stimulus response, such that a plurality of visual stimulus responses are recorded, wherein the visual stimulus response comprises an action potential spike. As described herein, the visual stimulus response recorded is a spike in an action potential of the retina based at least in part on the visual stimulus presented to the retina. In some cases, for each visual stimulus presented to the retina, and for each visual stimulus response recorded in response to its corresponding visual stimulus, the computing device may additionally record a corresponding time stamp value that associates the visual stimulus presented to the retina with its corresponding visual stimulus response. The time stamp value may indicate a start instance and a stop instance of the presentation of the corresponding visual stimulus to the retina. In some cases, to record each visual stimulus response, a multi-electrode array (MEA) (also known as a microelectrode array) may be used.

[0061] Block 306 recites training a machine-learning model using a training set of the visual stimulus responses. In some cases, Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used to generate the training set(s). As described, additional and/or alternative methods, such as k-fold cross-validation and holdout sampling, may also be implemented. In some cases, the machine-learning model may be a radial basis function (RBF) kernel support vector machine (SVM) classifier. In other cases, the machine-learning model may be a random forest (e.g., random decision forest) classifier.

[0062] Block 308 recites, based on the training, determining an accuracy level of the machine-learning model using a testing set of the visual stimulus responses, wherein the accuracy level of the machine-learning model indicates a visual acuity of the retina. To generate the testing set(s), in some cases Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used. As can be appreciated, additional and/or alternative methods, such as k-fold cross-validation and holdout sampling, may also be implemented. Determining the accuracy level of the machine-learning model indicates a visual acuity for the retina being tested. In some embodiments, based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina being tested, at least one corrective treatment for a vision impairment may be recommended.

[0063] FIG. 4 is a flowchart of a method 400 arranged in accordance with examples described herein. The method 400 may be implemented, for example, using the system 100 of FIG. 1.

[0064] The method 400 includes receiving a plurality of visual stimulus responses, wherein each visual stimulus response is associated with a visual stimulus of a plurality of visual stimuli presented to a retina, in block 402; training a machine-learning model using a training set of the visual stimulus responses in block 404; and, based on the training and using a testing set of the visual stimulus responses, determining an accuracy level of the machine-learning model, wherein the accuracy level of the machine-learning model indicates a visual acuity of the retina, in block 406.

[0065] Block 402 recites receiving a plurality of visual stimulus responses, wherein each visual stimulus response is associated with a visual stimulus of a plurality of visual stimuli presented to a retina. The visual stimulus response received may be a spike in an action potential of the retina based at least in part on the visual stimulus presented to the retina. In some cases, for each visual stimulus presented to the retina, and for each visual stimulus response recorded and received in response to its corresponding visual stimulus, a corresponding time stamp value that associates the visual stimulus presented to the retina to its corresponding visual stimulus response may be recorded and received.

[0066] Block 404 recites training a machine-learning model using a training set of the visual stimulus responses. As described herein, in some cases Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used to generate the training set(s). As described, additional and/or alternative methods, such as k-fold cross-validation and holdout sampling, may also be implemented. In some cases, the machine-learning model may be a radial basis function (RBF) kernel support vector machine (SVM) classifier. In other cases, the machine-learning model may be a random forest (e.g., random decision forest) classifier.

[0067] Block 406 recites, based on the training and using a testing set of the visual stimulus responses, determining an accuracy level of the machine-learning model, wherein the accuracy level of the machine-learning model indicates a visual acuity of the retina. As described herein, to generate the testing set(s), in some cases Monte Carlo Cross-Validation (e.g., rotation estimation; out-of-sample testing; repeated random sub-sampling) may be used. As can be appreciated, additional and/or alternative methods, such as k-fold cross-validation and holdout sampling, may also be implemented. Determining the accuracy level of the machine-learning model indicates a visual acuity for the retina being tested. In some embodiments, based on determining the accuracy level of the machine-learning model that indicates the visual acuity of the retina being tested, at least one corrective treatment for a vision impairment may be recommended.

[0068] From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention.

[0069] The particulars shown herein are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention.

[0070] Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of "including, but not limited to." Words using the singular or plural number also include the plural and singular number, respectively. Additionally, the words "herein," "above," and "below," and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of the application.

[0071] Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods.

[0072] Finally, the above discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.

IMPLEMENTED EXAMPLES

Animals

[0073] Examples of methods described herein were implemented on non-human mammals (e.g., mice). C57BL/6J (wt) and C3H/HeJ (Pde6b^rd1/rd1) mice between 4 and 12 months old were used in this study. Both male and female mice were incorporated into the studies randomly. All non-human mammals were treated in accordance with the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research, and all animal experiments were conducted following protocols approved by the institutional animal care and use committee at the University of Washington.

Multi-Electrode Array (MEA) Recordings

[0074] Euthanasia of all mice was performed using CO2, and their eyes were quickly enucleated and placed into room-temperature ACSF, all under dim red light. The ACSF solution contained 125 mM NaCl, 2.5 mM KCl, 1.25 mM NaH2PO4, 1 mM MgCl2, 2 mM CaCl2, 26 mM NaHCO3, and 20 mM D-glucose, and was aerated with 95% O2/5% CO2. Isolated retinas were placed whole mount with retinal quadrant-specific sections, retinal ganglion cell layer facing down, onto a 60-channel MEA (60MEA 200/30iR-Ti, Multi Channel Systems, Reutlingen, Germany) and recorded on a MEA 1060-inv-BC system. Retinas were kept at 34 °C, continuously perfused with ACSF at 3 mL/min, and allowed to settle and recover for 1 h prior to recordings. Processing of extracellular spikes was performed using Offline Sorter V3 (Plexon, Dallas, TX, USA); spikes were high-pass filtered at 300 Hz and digitized at 20 kHz with a spike threshold setting of 5 SD for each channel.
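The 5 SD threshold step of spike detection can be illustrated with a minimal sketch (a hypothetical helper, not the Offline Sorter pipeline; it assumes the trace has already been high-pass filtered, and estimates the threshold from the trace's own standard deviation):

```python
import numpy as np

def detect_spikes(trace, fs_hz=20000.0, n_sd=5.0):
    """Return spike times (s) from threshold crossings at n_sd std. deviations.

    trace: high-pass-filtered voltage samples from one channel.
    fs_hz: digitization rate (20 kHz in the recordings described above).
    """
    thresh = n_sd * np.std(trace)
    above = np.abs(trace) > thresh
    # keep only the onset sample of each threshold crossing
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    return onsets / fs_hz

# synthetic channel: flat baseline with two spike-like deflections
trace = np.zeros(1000)
trace[100:103] = 1.0   # positive-going spike spanning a few samples
trace[500] = -1.0      # negative-going spike
times = detect_spikes(trace)
```

A production sorter would additionally cluster the detected waveforms into units (spike sorting); this sketch covers only thresholded event detection.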

Drug Treatments

[0075] DNQX (50 µM) and AP5 (25 µM) in ACSF were administered under continuous perfusion 1 hour before and during visual stimulation. Washout procedures consisted of 1 hour of perfusion with ACSF. BENAQ (100 µM) in a 10% cyclodextrin formulation was administered for 1 hour under perfusion before light stimulation testing.

Stimulator

[0076] A DLP LightCrafter 4500 (Texas Instruments) coupled to a custom optical, two-lens system facilitated focusing light stimulation onto the retina from below. The system provided 1280x800 pixels of spatiotemporally patterned stimulation over the area of the MEA, with a refresh rate of 60 Hz and control of brightness and simultaneous RGB LED operation. The DLP projector is outfitted with 3 independently controlled RGB LEDs with recorded maximum emissions at 617, 520, and 450 nm for red, green, and blue, respectively, with an average range of ±30 nm. For all experiments, regardless of isolated or compound use of LEDs, a total photon flux of ~3.5 x 10^13 photons/cm^2·s was used, as measured for each maximal emission per LED. For white light stimulations, equivalent photon flux per LED was used.

Stimuli

[0077] Each experimental stimulus lasted 1 s and was preceded by 1 s of darkness followed by a uniform draw of 1-1.5 s of darkness to limit adaptation. For monitoring the integrity of the retina, each recording began with 1 s of darkness, a 1 s full-field flash, and 1-1.5 s of darkness. These stimuli were repeated every 5 min.

[0078] In a single recording, either (1) a checkerboard protocol was used, where the positive stimuli displayed a checkerboard pattern in which the color of each square swapped at 0.5 s and the negative stimuli maintained a static pattern for the entire 1 s, or (2) a square grating protocol was used, where the positive stimuli moved up and right for 1 s and the negative stimuli moved down and left for 1 s. Both checkerboards and gratings were oriented at 45 degrees to match the diamond pixel arrangement of our DMD.

[0079] In a typical recording, 25 examples of each class for each of 6 resolutions for each of 7 stimulus conditions (i.e., contrast), for a total of 2,100 stimuli, were displayed (e.g., presented) in ~2 hrs, depending on the random darkness times. The stimuli were presented in random order for each run.

[0080] For the ETDRS letter experiments, a similar protocol was used wherein each stimulus came from one of ten letter classes (C, D, H, K, N, O, R, S, V, or Z) using the Sloan font.

Feature Extraction and Cross-Validation

[0081] First, a featurized representation of the RGC responses to each 1 s checkerboard or grating stimulus was constructed by counting the number of spikes per unit per 100 ms time bin and flattening these counts into a one-dimensional feature vector. The target for each example was either 0 or 1, corresponding to either 1a) two identical checkerboards or 1b) two alternating checkerboards, or 2a) a drifting grating to the left or 2b) a drifting grating to the right; or one of ten values corresponding to each of the ten letters. The 6 sizes x 7 contrasts x 25 repetitions x 2 classes were split into 42 groups of 50 samples. Using Monte Carlo Cross-Validation (MCCV), the 50 samples were randomly partitioned into 30 training examples, which were used to train a radial basis function (RBF) kernel support vector machine (herein referred to as SVM) using a one-against-all strategy for generalization to k classes, and 20 withheld examples used to test the accuracy of the SVM. This partition process was repeated 30 times to construct a sample mean from the accuracy of the SVM on withheld data.
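The binned-count featurization above can be sketched in a few lines; the helper name and the per-unit spike-time lists are hypothetical, but the construction (spike counts per unit per 100 ms bin, flattened to one vector) follows the description:

```python
import numpy as np

def featurize(spike_times_by_unit, duration_s=1.0, bin_s=0.1):
    """Count spikes per unit per 100 ms bin; flatten to one feature vector.

    spike_times_by_unit: one list of spike times (s) per sorted unit,
    relative to stimulus onset.
    """
    n_bins = int(round(duration_s / bin_s))            # 10 bins over 1 s
    edges = np.linspace(0.0, duration_s, n_bins + 1)
    counts = [np.histogram(t, bins=edges)[0] for t in spike_times_by_unit]
    return np.concatenate(counts)                      # shape: n_units * n_bins

# two units responding within one 1 s stimulus window
x = featurize([[0.05, 0.07, 0.55], [0.91]])
```

Each example thus becomes a vector of length (number of units x 10), which is what the SVM consumes.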

Radial Basis Function (RBF) Kernel Support Vector Machine (SVM) Classifier

[0082] For each MCCV draw, a grid search was performed on two RBF kernel SVM hyperparameters: C, a regularization parameter against the decision function's margin, and gamma, a parameter adjusting the radius of influence of each example. 15 SVM models were trained for each draw of training examples, and the best-performing model, as measured by accuracy on training data, was chosen for evaluation on the withheld test data. Thus, for each group that was trained, the number of draws x models per grid search equaled 450 SVM models (30 draws x 15 models). For a typical recording, 18,900 unique SVM models were trained.
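A sketch of one such grid search, assuming scikit-learn and a hypothetical 5 x 3 grid of (C, gamma) values giving the 15 models per draw. Note one simplification: scikit-learn's GridSearchCV selects by cross-validated accuracy rather than by training accuracy as described above:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))   # stand-in for 30 training feature vectors
y = np.tile([0, 1], 15)         # binary stimulus-class labels

# a 5 x 3 grid gives the 15 (C, gamma) models trained per MCCV draw
param_grid = {"C": [0.1, 1.0, 10.0, 100.0, 1000.0],
              "gamma": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
best_model = search.best_estimator_   # evaluated on withheld data downstream
```

Repeating this over 30 MCCV draws reproduces the 450-models-per-group count described above.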

Acuity Threshold

[0083] A neural code is capable of transmitting information at a particular spatial acuity if a receiver is capable of recovering the signal at a low error rate. Therefore, the acuity threshold was defined as the highest value of cycles/degree for which the mean accuracy level outperformed random chance at a 1% or lower significance level. Contrast sensitivity was similarly defined as the largest value of 1/contrast for a given spatial acuity that outperformed random chance at a 1% or lower significance level.
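The threshold rule above can be sketched with a one-sided binomial test against chance; the helper names and the withheld-data counts below are hypothetical illustrations, and the source does not specify which significance test was used:

```python
from scipy.stats import binom

def outperforms_chance(n_correct, n_trials, chance=0.5, alpha=0.01):
    """One-sided binomial test: does accuracy beat chance at level alpha?"""
    p_value = binom.sf(n_correct - 1, n_trials, chance)  # P(X >= n_correct)
    return p_value <= alpha

def acuity_threshold(results_by_cpd, **kwargs):
    """Highest cycles/degree whose withheld-data accuracy beats chance.

    results_by_cpd maps cycles/degree -> (n_correct, n_trials) on withheld data.
    """
    passing = [cpd for cpd, (k, n) in results_by_cpd.items()
               if outperforms_chance(k, n, **kwargs)]
    return max(passing) if passing else None

# hypothetical withheld-data results at three spatial frequencies:
# accuracy is far above chance at 0.1 and 0.3 cpd, near chance at 0.5 cpd
threshold = acuity_threshold({0.1: (590, 600), 0.3: (540, 600), 0.5: (310, 600)})
```

Contrast sensitivity follows the same pattern with 1/contrast values in place of cycles/degree.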