

Title:
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR ASSESSING VISUAL FUNCTION USING VIRTUAL MOBILITY TESTS
Document Type and Number:
WIPO Patent Application WO/2023/172768
Kind Code:
A1
Abstract:
Methods, systems, and computer readable media for assessing visual function using virtual mobility tests are disclosed. One system includes a processor and a memory. The system is configured for: providing, via a display, a virtual mobility test in a virtual environment for testing visual function of a user; displaying, during the virtual mobility test, virtual objects for intentional tagging by the user; counting, during or after the virtual mobility test, a number of tagged virtual objects tagged by the user; and assessing the visual function of the user using the count of tagged virtual objects tagged by the user during the virtual mobility test.

Inventors:
BENNETT JEAN (US)
ALEMAN TOMAS (US)
ALEMAN ELENA (US)
MAGUIRE KATHERINE (US)
MAGUIRE WILLIAM (US)
MILLER ALEXANDER (US)
BENNETT NANCY (US)
Application Number:
PCT/US2023/015066
Publication Date:
September 14, 2023
Filing Date:
March 13, 2023
Assignee:
UNIV PENNSYLVANIA (US)
International Classes:
A61B3/032; G06F3/01; A63F13/00
Domestic Patent References:
WO2020221997A12020-11-05
Foreign References:
US20210259539A12021-08-26
US20220013228A12022-01-13
Attorney, Agent or Firm:
HUNT, Gregory, A. (US)
Claims:
CLAIMS

What is claimed is:

1. A system comprising: at least one processor; and a memory, wherein the system is configured for: providing, via a display, a virtual mobility test in a virtual environment for testing visual function of a user; displaying, during the virtual mobility test, virtual objects for intentional tagging by the user; counting, during or after the virtual mobility test, a number of tagged virtual objects tagged by the user; and assessing the visual function of the user using the count of tagged virtual objects tagged by the user during the virtual mobility test.

2. The system of claim 1 wherein counting the number of tagged virtual objects includes sorting and/or analyzing the tagged virtual objects and untagged virtual objects by size, shape, location and/or other characteristics.

3. The system of claim 1 wherein each of the tagged virtual objects is tagged via a tagging procedure, wherein the tagging procedure includes interacting, by the user or a virtual avatar of the user, with an untagged virtual object in the virtual environment and providing input via a physical input device.

4. The system of claim 3 wherein interacting with the untagged virtual object includes touching the untagged virtual object, pointing at the untagged virtual object, or identifying the untagged virtual object.

5. The system of claim 3 wherein providing the input includes pressing a button, pressing a sequence of buttons, inputting identification information, and/or moving a joystick or directional input control; and wherein the physical input device includes a game controller, a remote controller, a keyboard, a wireless handheld device, a wired handheld device, or a button device.

6. The system of claim 1 wherein the display includes an immersive or interactive display system and wherein the virtual environment includes an extended reality (XR) environment, an augmented reality (AR) environment, a mixed reality (MR) environment, or a virtual reality (VR) environment; or wherein the system is configured for providing auditory or haptic feedback to the user when a feedback condition occurs, wherein the feedback condition includes a successful tagging procedure or an unsuccessful tagging procedure.

7. The system of claim 1 wherein assessing the visual function of the user using the count of tagged virtual objects includes weighting the count of tagged virtual objects or weighting each of the tagged virtual objects based on environmental attributes, wherein the environmental attributes include luminance, shadow, color, contrast, gradients of contrast or color on the surface of one or more of the tagged virtual objects, reflectance or color of borders of one or more of the tagged virtual objects, or a lighting condition associated with one or more of the tagged virtual objects, a height of one or more of the tagged virtual objects, a size of one or more of the tagged virtual objects, or a motion or speed of one or more of the tagged virtual objects.

8. The system of claim 1 wherein assessing the visual function of the user using the count of tagged virtual objects includes: comparing the count of tagged virtual objects to a second count of tagged virtual objects associated with a person or a population that has or does not have a vision condition; or comparing a computed score associated with the count of tagged virtual objects to a second computed score associated with a second count of tagged virtual objects associated with a person or a population that has or does not have a vision condition.

9. The system of claim 1 wherein assessing the visual function of the user includes: assessing the user’s ability to tag the virtual objects in a proper or predetermined sequence during the virtual mobility test; assessing an amount of time that it takes for the user to recognize one or more configuration parameters of the virtual mobility test, wherein the one or more configuration parameters include a start indicator, a start location, or a direction of a path of the virtual mobility test; or assessing the user’s ability to perform a plurality of visual tasks concurrently, wherein a first task of the plurality of visual tasks includes following a path of the virtual mobility test and a second task of the plurality of visual tasks includes tagging the virtual objects.

10. A method, the method comprising: providing, via a display, a virtual mobility test in a virtual environment for testing visual function of a user; displaying, during the virtual mobility test, virtual objects for intentional tagging by the user; counting, during or after the virtual mobility test, a number of tagged virtual objects tagged by the user; and assessing the visual function of the user using the count of tagged virtual objects tagged by the user during the virtual mobility test.

11. The method of claim 10 wherein counting the number of tagged virtual objects includes sorting and/or analyzing the tagged virtual objects and untagged virtual objects of the virtual mobility test by size, shape, location, and/or other characteristics.

12. The method of claim 10 wherein each of the tagged virtual objects is tagged via a tagging procedure, wherein the tagging procedure includes interacting, by the user or a virtual avatar of the user, with an untagged virtual object in the virtual environment and providing input via a physical input device.

13. The method of claim 12 wherein the user or the virtual avatar of the user interacts with the untagged virtual object by touching the virtual object, pointing at the virtual object, or identifying the virtual object.

14. The method of claim 12 wherein providing the input includes pressing a button, pressing a sequence of buttons, inputting identification information, and/or moving a joystick or directional input control; and wherein the physical input device includes a game controller, a remote controller, a keyboard, a wireless handheld device, a wired handheld device, or a button device.

15. The method of claim 10 wherein the display includes an immersive or interactive display system and wherein the virtual environment includes an extended reality (XR) environment, an augmented reality (AR) environment, a mixed reality (MR) environment, or a virtual reality (VR) environment; or wherein the system is configured for providing auditory or haptic feedback to the user when a feedback condition occurs, wherein the feedback condition includes a successful tagging procedure or an unsuccessful tagging procedure.

16. The method of claim 10 wherein assessing the visual function of the user using the count of tagged virtual objects includes weighting the count of tagged virtual objects or weighting each of the tagged virtual objects based on environmental attributes, wherein the environmental attributes include luminance, shadow, color, contrast, or gradients of contrast or color on the surface of one or more of the tagged virtual objects, reflectance or color of borders of one or more of the tagged virtual objects, or a lighting condition associated with one or more of the tagged virtual objects, a height of one or more of the tagged virtual objects, a size of one or more of the tagged virtual objects, or a motion or speed of one or more of the tagged virtual objects.

17. The method of claim 10 wherein assessing the visual function of the user using the count of tagged virtual objects includes: comparing the count of tagged virtual objects to a second count of tagged virtual objects associated with a person or a population that has or does not have a vision condition; or comparing a computed score associated with the count of tagged virtual objects to a second computed score associated with a second count of tagged virtual objects associated with a person or a population that has or does not have a vision condition.

18. The method of claim 10 wherein assessing the visual function of the user includes: assessing the user’s ability to tag the virtual objects in a proper or predetermined sequence during the virtual mobility test; assessing an amount of time that it takes for the user to recognize one or more configuration parameters of the virtual mobility test, wherein the one or more configuration parameters include a start indicator, a start location, or a direction of a path of the virtual mobility test; or assessing the user’s ability to perform a plurality of visual tasks concurrently, wherein a first task of the plurality of visual tasks includes following a path of the virtual mobility test and a second task of the plurality of visual tasks includes tagging the virtual objects.

19. The method of claim 18 wherein each of the plurality of visual tasks includes one or more independent variables that affect at least one visual element associated with the respective visual task, wherein the independent variables affect size, shape, location, color, luminance, shadow, color, contrast, light intensity, or reflectivity of the at least one visual element.

20. A non-transitory computer readable medium having stored thereon executable instructions that when executed by at least one processor of a computer cause the computer to perform steps comprising: providing, via a display, a virtual mobility test in a virtual environment for testing visual function of a user; displaying, during the virtual mobility test, virtual objects for intentional tagging by the user; counting, during or after the virtual mobility test, a number of tagged virtual objects tagged by the user; and assessing the visual function of the user using the count of tagged virtual objects tagged by the user during the virtual mobility test.

Description:
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR ASSESSING VISUAL FUNCTION USING VIRTUAL MOBILITY TESTS

PRIORITY CLAIM

This application claims the benefit of U.S. Patent Application Serial No. 63/318,921, filed March 11, 2022, the disclosure of which is incorporated by reference in its entirety.

TECHNICAL FIELD

The subject matter described herein relates to virtual reality. More particularly, the subject matter described herein relates to methods, systems, and computer readable media for assessing visual function using virtual mobility tests.

BACKGROUND

One challenge with developing treatments for eye disorders involves developing test paradigms that can quickly, accurately, and reproducibly characterize the level of visual function and functional vision in real-life situations. Visual function can encompass many different aspects or parameters of vision, including visual acuity (resolution), visual field extent (peripheral vision), contrast sensitivity, motion detection, color vision, light sensitivity, and the pattern recovery or adaptation to different light exposures, to name a few. Functional vision, i.e., the ability to use vision to carry out different tasks, may therefore be considered a direct behavioral consequence of visual function. These attributes of vision are typically tested in isolation, e.g., in a scenario detached from the real-life use of vision. For example, a physical mobility test involving an obstacle course having various obstacles in a room may be used to evaluate one or more aspects of vision function. However, such a mobility test can involve a number of issues including time-consuming setup, limited configurability, risk of injury to users, and limited quantitation of results.

Accordingly, there exists a need for methods, systems, and computer readable media for assessing visual function using virtual mobility tests.

SUMMARY

Methods, systems, and computer readable media for assessing visual function using virtual mobility tests are disclosed. One system includes a processor and a memory. The system is configured for: providing, via a display, a virtual mobility test in a virtual environment for testing visual function of a user; displaying, during the virtual mobility test, virtual objects for intentional tagging by the user; counting, during or after the virtual mobility test, a number of tagged virtual objects tagged by the user; and assessing the visual function of the user using the count of tagged virtual objects tagged by the user during the virtual mobility test.

One method includes: providing, via a display, a virtual mobility test in a virtual environment for testing visual function of a user; displaying, during the virtual mobility test, virtual objects for intentional tagging by the user; counting, during or after the virtual mobility test, a number of tagged virtual objects tagged by the user; and assessing the visual function of the user using the count of tagged virtual objects tagged by the user during the virtual mobility test.

The subject matter described herein may be implemented in hardware, software, firmware, or any combination thereof. As such, the terms “function” or “node” as used herein refer to hardware, which may also include software and/or firmware components, for implementing the feature(s) being described. In some exemplary implementations, the subject matter described herein may be implemented using a computer readable medium having stored thereon computer executable instructions that, when executed by the processor of a computer, control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory computer readable media, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms. In some exemplary implementations, the subject matter described herein may be implemented using hardware, software, or firmware delivering augmented or virtual reality.

BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter described herein will now be explained with reference to the accompanying drawings of which:

Figure 1 is a diagram illustrating an example virtual mobility test system (VMTS) for testing visual function;

Figures 2A-2B are diagrams illustrating example templates for virtual mobility tests;

Figure 3 is a diagram illustrating a user performing a virtual mobility test;

Figure 4 is a diagram illustrating example obstacles in a virtual mobility test;

Figure 5 is a diagram illustrating various sized obstacles in a virtual mobility test;

Figure 6 is a diagram illustrating virtual mobility tests with various lighting conditions;

Figures 7A-7B are diagrams illustrating example data captured during virtual mobility tests;

Figures 8A-8B are diagrams illustrating various aspects of example virtual mobility tests;

Figures 9A-9E depict graphs indicating various data gathered from subjects in a study using virtual mobility testing;

Figures 10A-10B depict vision field diagrams associated with subjects having retinal pigment epithelium 65 (RPE65) gene mutations; and

Figure 11 is a flow chart illustrating an example process for assessing visual function using a virtual mobility test.

DETAILED DESCRIPTION

The subject matter described herein relates to methods, systems, and computer readable media for assessing visual function using virtual mobility tests. A conventional mobility test for testing visual function of a user may involve one or more physical obstacle courses and/or other physical activities to perform. Such courses and/or physical activities may be based on real-life scenarios and/or activities, e.g., walking in a dim hallway or walking on a floor cluttered with obstacles. Existing mobility tests, however, have limited configurability and other issues. For example, conventional mobility tests are, by design, generally inflexible and difficult to implement and reproduce since these tests are usually designed using a particular implementation and equipment, e.g., a test designer’s specific hardware, obstacles, and physical space requirements.

One example of a ‘real-life’ or physical mobility test is the Multi-Luminance Mobility Test (MLMT; the “RPE65” test) for testing for retinal disease that affects the ability to see in low luminance conditions, e.g., a retinal dystrophy due to retinal pigment epithelium 65 (RPE65) gene mutations. This physical test measures how a person functions in a vision-related activity of avoiding obstacles while following a pathway in different levels of illumination. While this physical test reflects the everyday life level of vision for RPE65-associated disease, the “RPE65” test suffers from a number of limitations. Example limitations for the “RPE65” test are discussed below.

1) The “RPE65” test is limited in usefulness for other populations of low vision patients. For example, the test cannot be used reliably to elicit visual limitations of individuals with fairly good visual acuity (e.g., 20/60 or better) but limited fields of vision.

2) The set-up of the “RPE65” test is challenging in that it requires a dedicated, large space. For example, the test area for the “RPE65” test must be capable of holding a 17 feet (ft) X 10 ft obstacle course, the test user (and companion) and the test operators, and cameras. Further, the room must be light-tight (e.g., not transmitting or reflecting light) and capable of presenting lighting conditions at a range of calibrated, accurate luminance levels (e.g., 1, 4, 10, 50, 125, 250, and 400 lux). Further, this illumination must be uniform in the test area.

3) Setting-up a physical obstacle course and randomizing assignment and positions of obstacles for the “RPE65” test (even for a limited number of layouts) is time-consuming.

4) Physical objects on a physical obstacle course are an injury risk to patients (e.g., obstacles can cause a test user to fall or trip).

5) A “RPE65” test user can cheat during the test by using “echolocation” of objects instead of their vision to identify large objects.

6) A “RPE65” test user must be guided back to the course by the test operator if the user goes off course.

7) The “RPE65” test does not take into account that different individuals have different heights (and thus different visual angles).

8) The “RPE65” test captures video recordings of the subject’s performance which are then graded by outside consultants. This results in potential disclosure of personal identifiers.

9) The “RPE65” test has difficult and limited quantitation for evaluating a test user’s performance. For example, the scoring system for this test is challenging as it requires review of videos by masked graders and subjective grading of collisions and other aspects of the performance. Further, since the data is collected through videos showing the performance in two dimensions and focuses generally on the feet, there is no opportunity to collect additional relevant data, such as direction of gaze, likelihood of collision with objects beyond the view of the camera lens, velocity in different directions, acceleration, etc.

In accordance with some aspects of the subject matter described herein, techniques, methods, systems, or mechanisms are disclosed for using a virtual (e.g., an extended reality (XR), an augmented reality (AR), a mixed reality (MR), or a virtual reality (VR) based) mobility test. For example, a virtual mobility test system (e.g., a computer, an XR or VR headset, and body movement detection sensors) may configure, generate, and analyze a virtual mobility test for testing visual function of a user. In this example, the test operator or the virtual mobility test system may change virtually any aspect of the virtual mobility test, including, for example, the size, shape, and placement of obstacles and the lighting conditions; may provide haptic and audio user feedback; and may use these capabilities to test for various diseases and/or eye or vision conditions. Moreover, since a virtual mobility test does not involve real or physical obstacles, cost and time associated with setting up and administering the virtual mobility test may be significantly reduced compared to a physical mobility test. Further, a virtual mobility test may be configured to efficiently capture and store relevant data not obtained in conventional physical tests (e.g., eye or head movements) and/or may capture data with more precision (e.g., via body movement detection sensors) than in conventional physical tests. With the VR system, the scene can be displayed to one eye or the other or to both eyes simultaneously. Furthermore, with additional and more precise data, a virtual mobility test system or a related entity may produce more objective and/or accurate test results (e.g., user performance scores).

In accordance with some aspects of the subject matter described herein, techniques, methods, systems, or mechanisms are disclosed for assessing visual function using virtual mobility tests. For example, a virtual mobility test system (e.g., a computer, an XR or VR headset, and body movement detection sensors) may be configured for: providing, via a display, a virtual mobility test in a virtual environment for testing visual function of a user; displaying, during the virtual mobility test, virtual objects for intentional tagging by the user; counting, during or after the virtual mobility test, a number of tagged virtual objects tagged by the user; and assessing the visual function of the user using the count of tagged virtual objects tagged by the user during the virtual mobility test.

In accordance with some aspects of the subject matter described herein, a virtual mobility test system may include one or more of the following features: use of a tetherless VR headset; incorporation of virtual obstacles of the sort that present real life challenges in daily living to vision impaired individuals; testing using a defined set of luminance values (e.g., over a 3-log range of intensity); implementation of a standardized test paradigm that randomly presents any of a plurality of different course designs in order to minimize potential learning effect; an automated system for grading performance; and a scoring process that accurately and reproducibly identifies presence and severity of visual impairment in a wide range of vision related conditions.

Reference will now be made in detail to exemplary embodiments of the subject matter described herein, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers may be used throughout the drawings to refer to the same or like parts.

Figure 1 is a diagram illustrating an example virtual mobility test system (VMTS) 100 for testing visual function. As depicted in Figure 1, virtual mobility test system (VMTS) 100 may include a processing platform 101, a user display 108, and one or more sensors 110. VMTS 100 may represent any suitable entity or entities (e.g., a VIVE virtual reality (VR) system, an extended reality (XR) system, an augmented reality (AR) system, a mixed reality (MR) system, one or more servers, a desktop computer, a phone, a tablet computer, or a laptop) for testing visual function in a virtual (e.g., VR) environment. For example, VMTS 100 may be a laptop or a desktop computer executing one or more applications and may interact with various modules or components therein. In some embodiments, VMTS 100 may include a communications interface for receiving configuration information associated with generating or setting up a virtual environment for testing various aspects of visual function and/or detecting whether a user (e.g., a test participant or subject) may be displaying symptoms or characteristics of one or more eye issues or related conditions. In some embodiments, VMTS 100 may include one or more communications interfaces for receiving sensor data or feedback from one or more physical sensors 110 associated with a test user. For example, sensors 110 may detect body movement (e.g., of feet, arms, and head), along with related characteristics (e.g., speed and/or direction of movement). In some embodiments, VMTS 100 may include one or more processors, memories, and/or other hardware for generating a virtual environment, displaying a virtual environment (e.g., including various user interactions with the environment and related effects caused by the interactions, such as collisions with virtual obstacles, movements that the user makes to avoid collision (such as lifting up a leg to step over an object or ducking under a sign) or purposeful touch such as stepping on a “step” or touching a finish line), and recording or storing test output and related test results (e.g., including text based logs indicating sensor data and/or a video recreation of the test involving a user’s progress during the test).

In some embodiments, VMTS 100 may utilize processing platform 101 for providing various functionality. Processing platform 101 may represent any suitable entity or entities (e.g., one or more processors, computers, nodes, or computing platforms) for implementing various modules or system components. For example, processing platform 101 may include a server or computing device containing one or more processors and memory (e.g., flash, random-access memory, or data storage). In this example, various software and/or firmware modules may be implemented using the hardware at processing platform 101. In some embodiments, processing platform 101 may be communicatively connected to user display 108 and/or sensors 110.

In some embodiments, VMTS 100 or processing platform 101 may include a test controller (TC) 102, a sensor data collector 104, and a data storage 106. TC 102 may represent any suitable entity or entities (e.g., software executing on one or more processors) for performing one or more aspects associated with visual function testing in a virtual environment. For example, TC 102 may include functionality for configuring and generating a virtual environment for testing visual function of a user. In this example, TC 102 may also be configured for executing a related mobility test, providing output to user display 108 (e.g., a virtual reality (VR) display) or other device, receiving input from one or more sensors 110 (e.g., accelerometers, gyroscopes, eye trackers, or other body movement sensing devices) or other devices (e.g., video cameras). Continuing with this example, TC 102 may be configured to analyze various input associated with a virtual mobility test and provide various metrics and test results, e.g., a virtual recreation or replay of a user performing the virtual mobility test.

In some embodiments, TC 102 may communicate or interact with user display 108. User display 108 may represent any suitable entity or entities for receiving and providing information (e.g., audio, video, and/or haptic feedback) to a user. For example, user display 108 may include a VR headset (e.g., a VIVE VR headset or an Oculus Quest 2 VR headset), glasses, a mobile device, and/or another device that includes software executing on one or more processors. In this example, user display 108 may include various communications interfaces capable of communicating with TC 102, VMTS 100, sensors 110, and/or other entities. In some embodiments, TC 102 or VMTS 100 may stream data for displaying a virtual environment to user display 108. For example, TC 102 or VMTS 100 may receive input during testing from various sensors 110 related to a user’s progress through a mobility test (e.g., obstacle course) in the virtual environment and may send data (e.g., in real-time or near real-time) to reflect or depict a user’s progress along the course based on a variety of factors, e.g., a preconfigured obstacle course map and user interactions or received feedback from sensors 110. In another example, e.g., where a user's goal is to tag various virtual objects during a mobility test, TC 102 or VMTS 100 may receive input during testing from various sensors 110 indicating that a user is actively touching (e.g., with a virtual hand, finger, or body part) a virtual object while the user is also pressing one or more buttons or performing another input action via a physical input device (e.g., a game controller). In another example, a user may perform an input action (e.g., raising two controllers above their heads and/or pressing a button combination) that triggers a game or environment event, e.g., ending a virtual mobility test or restarting a virtual mobility test.
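
By way of illustration only, the following minimal Python sketch shows how a test controller such as TC 102 might register an intentional tag when a touch and a button press coincide; the helper names (is_touching, button_pressed, active_hand) and attributes are assumptions standing in for whatever collision and controller APIs the underlying VR engine actually exposes, and are not part of this disclosure.

```python
from dataclasses import dataclass
import time

@dataclass
class TagEvent:
    object_id: str    # identifier of the tagged virtual object
    timestamp: float  # seconds since the start of the test
    hand: str         # which virtual hand or body part made contact

def check_for_tag(user_state, virtual_object, controller, test_start, tag_log):
    """Register an intentional tag when the user is touching an object
    while also pressing a controller button (illustrative logic only)."""
    if virtual_object.tagged:
        return None
    if user_state.is_touching(virtual_object) and controller.button_pressed("trigger"):
        virtual_object.tagged = True
        event = TagEvent(object_id=virtual_object.object_id,
                         timestamp=time.time() - test_start,
                         hand=user_state.active_hand)
        tag_log.append(event)   # retained for counting and later assessment
        return event
    return None
```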

In some embodiments, TC 102 may communicate or interact with sensor data collector 104. Sensor data collector 104 may represent any suitable entity or entities (e.g., software executing on one or more processors and/or one or more communications interfaces) for receiving or obtaining sensor data and/or other information from sensors 110 (e.g., body movement detection sensors). For example, sensor data collector 104 may include an antenna or other hardware for receiving input via wireless technologies, e.g., Wi-Fi, Bluetooth, etc. In this example, sensor data collector 104 may be capable of identifying, collating, and/or analyzing input from various sensors 110. In some embodiments, sensors 110 may include accelerometers and/or gyroscopes to detect various aspects of body movements. In some embodiments, sensors 110 may include one or more surface electrodes attached to the skin of a user and sensor data collector 104 (or TC 102) may analyze and interpret EMG data into body movement. In some embodiments, sensors 110 or related components may be part of an integrated and/or wearable device, such as a VR display, a wristband, armband, glove, leg band, sock, headband, mask, sleeve, shirt, pants, or other device. For example, sensors 110 may be located at or near user display 108. In this example, such sensors 110 may be configured to identify or track eye movement, squinting, pupil changes, and/or other aspects related to eyesight.
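
As an illustrative sketch only (the class and field names here are assumptions, not part of this disclosure), a component such as sensor data collector 104 might collate timestamped samples from multiple sensors 110 into a single stream for later analysis:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorSample:
    sensor_id: str    # e.g., "headset", "left_hand", "right_foot"
    kind: str         # e.g., "accelerometer", "gyroscope", "eye_tracker", "emg"
    timestamp: float  # seconds since test start
    values: tuple     # raw reading, e.g., (x, y, z)

@dataclass
class SensorDataCollector:
    samples: List[SensorSample] = field(default_factory=list)

    def on_sample(self, sample: SensorSample) -> None:
        """Collate an incoming sample (e.g., received over Wi-Fi or Bluetooth)."""
        self.samples.append(sample)

    def by_sensor(self, sensor_id: str) -> List[SensorSample]:
        """Return all samples from one sensor, ordered by time, for analysis."""
        return sorted((s for s in self.samples if s.sensor_id == sensor_id),
                      key=lambda s: s.timestamp)
```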

In some embodiments, VMTS 100 or one or more modules therein (e.g., TC 102 and/or sensor data collector 104) may provide functionality for tailoring a virtual mobility test (e.g., a mobility test in a virtual environment using a VR system) to an individual and/or an eye or vision condition or disease. For example, a virtual mobility test as described herein may be administered such that each eye is used separately or both eyes are used together. In this example, by using this ability, the progression of an eye disease or the impact of an intervention can be measured according to the effects on each individual eye. In many eye diseases, there is symmetry between the eyes. Thus, if an intervention is tested in one eye, the other eye can serve as an untreated control and the difference in the performance between the two eyes can be used to evaluate safety and efficacy of the intervention. For example, data can be gathered from monocular (e.g., single eye) tests on the user’s perspective (e.g., is one object in front of another) and from binocular (e.g., two eyes) tests on the user’s depth perception and stereopsis.

In some embodiments, VMTS 100 or one or more modules therein may configure a virtual mobility test for monitoring eye health and/or related vision over a period of time. For example, changes from a baseline and/or changes in one eye compared to the other eye may measure the clinical utility of a treatment in that an increase in visually based orientation and mobility skills increases an individual’s safety and independence. Further, gaining the ability to orient and navigate under different conditions (e.g., using lower light levels than previously possible) may reflect an improvement of those activities of daily living that depend on vision.

In some embodiments, VMTS 100 or one or more modules therein may perform virtual mobility tests for a variety of purposes. For example, a virtual mobility test may be used for rehabilitation purposes (e.g., as part of exercises that can potentially improve the use of vision function or maintain existing vision function). In another example, a virtual mobility test may also be used for machine learning and artificial intelligence purposes.

In some embodiments, a virtual mobility test may be configured (e.g., using operator preferences or settings) to include content tailored to a particular vision condition or disease. In some embodiments, the configured content may be usable to facilitate or rule out a diagnosis and may at least in part be based on known symptoms associated with a particular vision condition. For example, there are different deficits in different ophthalmic diseases ranging from light sensitivity, color detection, contrast perception, depth perception, focus, movement perception, etc. In this example, a virtual mobility test may be configured such that each of these features can be tested, e.g., by controlling these variables (e.g., by adjusting lighting conditions and/or other conditions in the virtual environment where the virtual mobility test is to occur).

In some embodiments, VMTS 100 or one or more modules therein may configure, generate, perform, or provide a virtual mobility test comprising a number of virtual objects for intentional tagging by the user. For example, the virtual mobility test may include a virtual environment (e.g., an XR or VR environment) that includes a path for a user to walk, with an assortment of virtual objects having various attributes (e.g., size, location, brightness, color, speed, shape, etc.) placed along, near, or on the path. In this example, during the testing, the user may be tasked with intentionally tagging as many of the virtual objects as they can see, e.g., via a tagging procedure that involves touching the virtual object while pressing a button on a game controller.

In some embodiments, VMTS 100 or one or more modules therein may assess or evaluate visual function of a user of a virtual mobility test using a count of tagged virtual objects tagged by the user during the virtual mobility test. For example, VMTS 100 or one or more modules therein may count (e.g., while the test is ongoing or after the virtual mobility test has completed) the virtual objects tagged by the user and then use the count or a score derived from the count (or counts or scores from a series of tests) to determine the user's visual function.

In some embodiments, visual function assessment using a count of tagged virtual objects may involve comparing the count of tagged virtual objects or a derived score to a threshold value associated with normal sight and/or abnormal sight (e.g., threshold values may be based on results from tested subject populations). For example, for a given virtual mobility test comprising 10 taggable virtual objects, a threshold value for indicating normal sight may be 9 tagged virtual objects. In this example, if a subject completes the test with at least 9 tagged virtual objects then VMTS 100 or one or more modules therein may generate an assessment indicating that the subject has normal sight. In another example, for a given virtual mobility test comprising 10 taggable virtual objects, a threshold value for indicating abnormal sight (e.g., retinal degeneration) may be 7 tagged virtual objects. In this example, if a subject completes the test with 7 or less tagged virtual objects then VMTS 100 or one or more modules therein may generate an assessment indicating that the subject has abnormal sight. Continuing with this example, after an assessment for a given test or series of tests, VMTS 100 or one or more modules therein may be configured to generate and perform additional tests for determining or identifying particular vision conditions or severity thereof.
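
As a minimal illustration of the threshold comparison described above, the following Python sketch maps a tag count to a coarse assessment using the example values from this paragraph (at least 9 of 10 suggesting normal sight, 7 or fewer of 10 suggesting abnormal sight); in practice the thresholds would be derived from results in tested subject populations.

```python
def assess_tag_count(tagged_count, total_objects=10,
                     normal_threshold=9, abnormal_threshold=7):
    """Map a tag count to a coarse assessment using example thresholds.

    Thresholds mirror the example in the text and are illustrative only;
    real thresholds would come from tested subject populations.
    """
    if tagged_count >= normal_threshold:
        return "normal sight indicated"
    if tagged_count <= abnormal_threshold:
        return "abnormal sight indicated; consider follow-up testing"
    return "inconclusive; consider additional tests"

# Example: a subject tags 8 of 10 objects
print(assess_tag_count(8))  # -> "inconclusive; consider additional tests"
```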

In some embodiments, a virtual mobility test may be configured to measure and/or evaluate symptoms of one or more retinal diseases, vision conditions, or related issues. For example, VMTS 100 or one or more modules therein may use predefined knowledge of symptoms regarding a vision condition to generate or configure a virtual mobility test for measuring and/or evaluating aspects of those symptoms. In this example, when generating or configuring a virtual mobility test for measuring and/or evaluating certain symptoms, VMTS 100 or one or more modules may query a data store using the predefined symptoms to identify predefined tasks and/or course portions usable for measuring and/or evaluating those symptoms. Example retinal diseases, vision conditions, or related issues that a virtual mobility test may be configured for include macular degeneration, optic nerve disease, retinitis pigmentosa (RP), choroideremia (CHM), one or more forms of color blindness, blue cone monochromacy, achromatopsia, diabetic retinopathy, retinal ischemia, or various central nervous system (CNS) disorders that affect vision.

In some embodiments, a virtual mobility test for measuring symptoms of macular degeneration (e.g., age-related macular degeneration, Stargardt disease, cone-rod dystrophy, etc.) may be configured and administered using VMTS 100 or one or more modules therein. A macular degeneration related virtual mobility test may be configured for measuring one or more aspects of macula function, e.g., visual acuity, color discrimination, and/or contrast sensitivity. In some examples, a macular degeneration related virtual mobility test may use arrows (e.g., directional course arrows) to measure visual acuity of the user, e.g., arrows may initially be designated to be visible with 20/200 Snellen visual acuity (e.g., low vision) but the arrows can be made smaller or larger to reflect the sizes normally measured on the visual acuity early treatment diabetic retinopathy study (ETDRS) chart (e.g., 20/15, 20/20, 20/25, 20/40, 20/50, 20/63, 20/80, 20/100, 20/200). In some examples, a macular degeneration related virtual mobility test may involve modifying colors of arrows, background, and/or objects to measure color discrimination because, with macular disease, detection of lower wavelengths of light (e.g., shades of yellow, purple, and pastels) is often the first to be impaired in the disease process. As the disease progresses, all color perception may be lost, leaving the person able to discriminate only signals mediated by rod photoreceptors (e.g., shades of grey). In order to elicit color perception abilities, arrows of one color can be placed over a background of another color similar to how numbers are presented in an Ishihara color vision test, e.g., using pseudoisochromatic plates. In some examples, a macular degeneration related virtual mobility test may present objects in colors that contrast with the color of the background. The contrast of the arrows and objects and shading of their edges in a virtual mobility test may also be modified to measure contrast discrimination because individuals with macular disease often have trouble identifying faces, objects, and obstacles due to their impaired ability to detect contrast.

In some embodiments, a virtual mobility test for measuring symptoms of optic nerve disease (e.g., glaucoma, optic neuritis, mitochondrial disorders such as Leber’s hereditary optic neuropathy) may be configured and administered using VMTS 100 or one or more modules therein. Because of the known symptoms of optic nerve disease, an optic nerve disease related virtual mobility test may be configured to measure and/or evaluate visual fields and light sensitivity of a user. In some examples, an optic nerve disease related virtual mobility test may present swinging objects from different directions and the test may measure or evaluate a user’s ability to detect these objects (e.g., by avoidance of collisions) while navigating the mobility course. The swinging objects in the test may be shown at different sizes in order to further elicit and evaluate the user’s ability to use peripheral vision. In some examples, an optic nerve disease related virtual mobility test may present swinging objects with different luminances in order to measure changes in light sensitivity associated with disease progression. The brighter lights may be perceived by users with optic nerve disease as having halos, which may have an impact on the user’s avoidance of the swinging objects. The user can be asked beforehand to report any perception of halos around lights, and any such reports can be documented and used for review or test analysis. In some examples, an optic nerve disease related virtual mobility test may present swinging objects with different levels of contrast, in order to measure changes in contrast sensitivity associated with disease progression. In some examples, an optic nerve disease related virtual mobility test may involve using brightness and/or luminance of arrows and objects in a related mobility course to measure brightness discrimination.
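
For the acuity-scaled arrows discussed above, the physical (or virtual) size corresponding to a target Snellen fraction can be approximated from the viewing distance. The following sketch assumes the common convention that a 20/20 optotype subtends about 5 arcminutes of visual angle; the exact sizing used in any particular implementation may differ.

```python
import math

def optotype_height_m(snellen_denominator, viewing_distance_m,
                      arcmin_at_20_20=5.0):
    """Approximate height for an optotype-like arrow at a given distance.

    Assumes a 20/20 optotype subtends ~5 arcminutes of visual angle, so a
    20/200 target subtends ten times that (illustrative convention only).
    """
    arcmin = arcmin_at_20_20 * (snellen_denominator / 20.0)
    angle_rad = math.radians(arcmin / 60.0)
    return 2.0 * viewing_distance_m * math.tan(angle_rad / 2.0)

# Example: a 20/200 arrow viewed from 2 meters is roughly 2.9 cm tall
print(round(optotype_height_m(200, 2.0), 4))  # ~0.0291
```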

In some embodiments, a virtual mobility test for measuring symptoms of RP in any of its various forms (e.g., RP found in a syndromic disease such as Usher Syndrome, Bardet-Biedl, Joubert, etc.) may be configured and administered using VMTS 100 or one or more modules therein. Symptoms of RP can include loss of peripheral vision followed eventually by loss of central vision and night blindness. Depending on the stage of disease, an RP related virtual mobility test may include a similar protocol established for the RPE65 form of Leber’s congenital amaurosis. In some embodiments, e.g., in order to further evaluate peripheral vision, an RP related virtual mobility test may present swinging objects from different directions, with different sizes, and different luminance (as described above for optic nerve disease testing). For further analyses of light sensitivity, an RP related virtual mobility test may involve a user using hand tracking and eye-tracking to control dimmer switches for virtual lights, where the user may set the brightness of the lights to where they think that they see best. The user’s perceptions of the conditions in which they see best can be compared with results measured using a standardized test.

In some embodiments, a virtual mobility test for measuring symptoms of CHM may be configured and administered using VMTS 100 or one or more modules therein. A CHM related virtual mobility test may be configured differently depending on the stage of the disease and/or age of the user (e.g., the person performing the test). For example, when testing juveniles with CHM, a CHM related virtual mobility test may be configured to focus on evaluating a user’s light sensitivity. However, since individuals with CHM usually have good visual acuity, arrows in a CHM related virtual mobility test may be small in size (on the order of 20/15 Snellen visual acuity). As the disease progresses, individuals with CHM may lose their visual fields and may also suffer from glare and difficulties in equilibrating if light levels are rapidly changed. Therefore, a CHM related virtual mobility test may present a number of swinging objects (e.g., to probe visual field loss), and lights that flash at designated intervals (e.g., to mimic glare and changes in light levels). In some embodiments, in lieu of swinging objects, a CHM related virtual mobility test may include objects and/or situations that might take place in daily life, e.g., birds flying overhead or a tree branch swaying in the breeze. Such situations may allow the user to position themselves such that glare is blocked by the flying object. In some embodiments, user interactions with daily life situations may act as a game or provide a sense of play. In such embodiments, VMTS 100 or one or more modules therein may use eye tracking to measure user response objectively with regard to interactive situations. In some examples, a CHM related virtual mobility test may involve dimming lighting and/or altering contrast levels at prescribed intervals or aperiodically (e.g., randomly).

In some embodiments, a virtual mobility test for measuring symptoms of red-green color blindness may be configured and administered using VMTS 100 or one or more modules therein. A red-green color blindness related virtual mobility test may focus on measuring or evaluating a user’s ability to discern or detect different colors, e.g., red and green. In some examples, a red-green color blindness related virtual mobility test may present arrows in green on a red background (or vice versa; red arrows on a green background). In addition to or alternatively, in some examples, a red-green color blindness related virtual mobility test may present obstacles as red objects on a green background.

In some embodiments, a virtual mobility test for measuring symptoms of blue-yellow color blindness may be configured and administered using VMTS 100 or one or more modules therein. A blue-yellow color blindness related virtual mobility test may focus on measuring or evaluating a user’s ability to discern or detect different colors, e.g., blue and yellow. In some examples, a blue-yellow color blindness related virtual mobility test may present arrows in blue on a yellow background (or vice versa; yellow arrows on a blue background). In addition to or alternatively, in some examples, a blue-yellow color blindness related virtual mobility test may present obstacles as yellow objects on a blue background.

In some embodiments, a virtual mobility test for measuring symptoms of blue cone monochromacy may be configured and administered using VMTS 100 or one or more modules therein. A blue cone monochromacy related virtual mobility test may focus on measuring or evaluating a user’s ability to discern or detect different colors. In some examples, a blue cone monochromacy related virtual mobility test may involve a virtual course or portion thereof being presented in greyscale for testing more detail of what the user sees. In addition to or alternatively, in some examples, a blue cone monochromacy related virtual mobility test may involve a virtual course or portion thereof being presented in one color or two different colors. In some examples where two colors are used, a blue cone monochromacy related virtual mobility test may present arrows in blue on a yellow background (or vice versa; yellow arrows on a blue background) or arrows in red on a green background (or vice versa; green arrows on a red background). In addition to or alternatively, in some examples, a blue cone monochromacy related virtual mobility test may present obstacles as yellow objects on a blue background or obstacles as red objects on a green background. In some embodiments, VMTS 100 or one or more modules therein may compare the user’s performances between the differently colored courses, e.g., change in performance from the greyscale course to the blue-yellow course.

In some embodiments, a virtual mobility test for measuring symptoms of achromatopsia may be configured and administered using VMTS 100 or one or more modules therein. Individuals with achromatopsia may suffer from sensitivity to lights and glare, have poor visual acuity, and have impaired color vision, e.g., they may only see objects in shades of grey and black and white. In some examples, instead of starting to present a mobility course with dim light (e.g., as with a RPE65-LCA related test), an achromatopsia related virtual mobility test may initially present a mobility course with bright light and then subsequent testing may test whether the user can perform more accurately at dimmer light, e.g., by decreasing brightness in subsequent runs. In some examples, an achromatopsia related virtual mobility test may determine a threshold lighting value at which a user is able to perform the test accurately, e.g., complete a related mobility course with an acceptable number of collisions, such as fewer than two collisions. In some examples, an achromatopsia related virtual mobility test may involve a user using hand tracking and eye-tracking to control dimmer switches for virtual lights, where the user may set the brightness of the lights to where they think that they see best. The user’s perceptions of the conditions in which they see best can be compared with results measured using a standardized test. In some examples, an achromatopsia related virtual mobility test may present arrows and/or obstacles in selected color combinations similar to those described for red-green color blindness or blue cone monochromacy.
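
As one possible illustration of determining such a threshold lighting value, the following sketch steps from brighter to dimmer levels and returns the dimmest level at which the user still completes the course with fewer than two collisions; run_test_at is a hypothetical callback that administers a single run and returns the number of collisions, and the example levels are borrowed from the luminance values listed earlier for the physical test.

```python
def find_luminance_threshold(run_test_at, luminance_levels_lux, max_collisions=2):
    """Find the dimmest luminance at which the user still performs accurately.

    Levels are tried from brightest to dimmest, mirroring the bright-first
    protocol described for achromatopsia testing (illustrative logic only).
    """
    passed_at = None
    for lux in sorted(luminance_levels_lux, reverse=True):
        collisions = run_test_at(lux)
        if collisions < max_collisions:
            passed_at = lux   # user still performs accurately at this level
        else:
            break             # performance failed; stop dimming further
    return passed_at

# Example usage with levels similar to those of the physical test:
# threshold_lux = find_luminance_threshold(run_test_at, [400, 250, 125, 50, 10, 4, 1])
```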

In some embodiments, a virtual mobility test for measuring symptoms of diabetic retinopathy may be configured and administered using VMTS 100 or one or more modules therein. Symptoms of diabetic retinopathy can include blurred vision, impaired field of view, and difficulty with color discrimination. In some examples, a diabetic retinopathy related virtual mobility test may use arrows (e.g., directional course arrows) to measure visual acuity of the user, e.g., arrows may initially be designated to be visible with 20/200 Snellen visual acuity (e.g., low vision) but the arrows can be made smaller or larger to reflect the sizes normally measured on the visual acuity ETDRS chart. In some examples, e.g., in one or more iterations of a diabetic retinopathy related virtual mobility test, a user could be provided a virtual dial that they can spin to optimize focus. Their perceived optimal focus could be compared with what is measured using a standardized test or other testing. In some examples, a diabetic retinopathy related virtual mobility test may involve modifying colors of arrows, background, and/or objects to measure color discrimination. In order to elicit color perception abilities, arrows in a diabetic retinopathy related virtual mobility test of one color can be placed over a background of another color similar to how numbers are presented in an Ishihara color vision test, e.g., using pseudoisochromatic plates. In some examples, a diabetic retinopathy related virtual mobility test may present objects in colors that contrast with the color of the background. In some examples, VMTS 100 or one or more modules therein may utilize haptic feedback with a diabetic retinopathy related virtual mobility test, e.g., by providing vibrations when a user approaches objects or obstacles. In such examples, haptic feedback or other audio components can be utilized with a diabetic retinopathy related virtual mobility test for testing whether a user utilizes echo-location (spatial audio) in their daily life.

In some embodiments, a virtual mobility test for measuring symptoms of retinal ischemia may be configured and administered using VMTS 100 or one or more modules therein. Symptoms of retinal ischemia can include blurred vision, graying or dimming of vision and/or loss of visual field. In some examples, a retinal ischemia related virtual mobility test may use arrows (e.g., directional course arrows) to measure visual acuity of the user, e.g., arrows may initially be designated to be visible with 20/200 Snellen visual acuity (e.g., low vision) but the arrows can be made smaller or larger to reflect the sizes normally measured on the visual acuity ETDRS chart. In some examples, a retinal ischemia related virtual mobility test may involve modifying colors of arrows, background, and/or objects to measure color discrimination. In order to elicit color perception abilities, arrows in a retinal ischemia related virtual mobility test of one color can be placed over a background of another color similar to how numbers are presented in an Ishihara color vision test, e.g., using pseudoisochromatic plates. In some examples, a retinal ischemia related virtual mobility test may present objects in colors that contrast with the color of the background. In some examples, a retinal ischemia related virtual mobility test may present a number of swinging objects (e.g., to probe visual field loss).

In some embodiments, a virtual mobility test for measuring symptoms of vision-affecting CNS disorders (e.g., a stroke or a brain tumor) may be configured and administered using VMTS 100 or one or more modules therein. Vision-affecting CNS disorders can result in vision observed only on one side for each eye or for both eyes together. As such, in some examples, a vision-affecting CNS disorder related virtual mobility test may involve testing various aspects associated with vision fields of a user. In some examples, a vision-affecting CNS disorder related virtual mobility test may present swinging objects from different directions and the test may measure or evaluate a user’s ability to detect these objects (e.g., by avoidance of collisions) while navigating the mobility course. The swinging objects in the test may be shown at different sizes in order to further elicit and evaluate the user’s ability to use peripheral vision.

In some embodiments, a virtual mobility test may be configured to include obstacles that represent challenges an individual can face in daily life, such as doorsteps, holes in the ground, objects that are left on the floor or that jut in a user’s path, objects at various heights (e.g., waist high, head high, etc.), and objects which can swing into the user’s path. In such embodiments, risk of injury may be significantly reduced relative to a conventional mobility test since the obstacles in the virtual mobility test are virtual and not real.

In some embodiments, virtual obstacles (e.g., obstacles in a virtual mobility test or a related virtual environment) can be adjusted or resized dynamically or prior to testing. For example, virtual obstacles, as a group or individually, may be enlarged or reduced by a certain factor (e.g., 50%) via a test operator and/or VMTS 100. In this example, a virtual mobility test may be configured to include dynamic obstacles that increase or decrease in size, e.g., if a user repeatedly hits the obstacle or cannot move past the obstacle.
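
A minimal sketch of such resizing logic appears below; the scale attribute and the specific growth rule are illustrative assumptions only, and one of several possible adaptation policies.

```python
def scale_obstacles(obstacles, factor):
    """Uniformly enlarge or reduce obstacles, e.g., factor=1.5 or 0.5."""
    for obstacle in obstacles:
        obstacle.scale *= factor  # hypothetical per-obstacle scale attribute

def adapt_obstacle_size(obstacle, collided, growth=1.1, max_scale=2.0):
    """One possible dynamic rule: grow an obstacle slightly each time the
    user collides with it, up to a cap, so repeated misses make it easier
    to detect on later passes."""
    if collided:
        obstacle.scale = min(obstacle.scale * growth, max_scale)
```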

In some embodiments, a virtual mobility test or a related obstacle course therein may be adjustable based on a user’s profile or related characteristics, e.g., height, weight, fitness level, age, or known deficiencies. For example, scalable obstacle courses may be useful for comparisons of performance of individuals who differ in height as a user’s height (e.g., distance of the eyes to the objects on the ground) affects visual resolution (e.g., visual acuity). In another example, scalable obstacle courses may be useful for following the visual performance of a child over time, e.g., as the child will grow and become an adult. In some embodiments, scaling an obstacle course may also be useful to ensure that obstacles or elements in the virtual environment (e.g., tiles that make up course segments) are sized appropriately (e.g., so that a user’s foot can fit along an approved path through the virtual obstacle course).
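
One simple way such per-user scaling might be computed is sketched below; the reference height of 1.70 meters and the element.scale attribute are arbitrary illustrative assumptions.

```python
def course_scale_factor(user_height_m, reference_height_m=1.70):
    """Scale a course to a user's height so that visual angles to
    floor-level objects are roughly comparable across users."""
    return user_height_m / reference_height_m

def scale_course(course_elements, user_height_m):
    """Apply the per-user factor to tiles, obstacles, arrows, etc."""
    factor = course_scale_factor(user_height_m)
    for element in course_elements:
        element.scale *= factor
```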

In some embodiments, a virtual mobility test or a related obstacle course therein may be adjustable so as to avoid or mitigate learning bias. In such embodiments, adjustment or modification may be performed such that a particular skill level or complexity for the test or course is maintained or represented. For example, VMTS 100 may adjust a path and/or various locations of obstacles presented in a virtual mobility test so as to prevent or mitigate learning bias by a user. In this example, VMTS 100 may utilize an algorithm so that the modified virtual mobility test is substantially equivalent to an original virtual mobility test. In some embodiments, to achieve equivalence, VMTS 100 may utilize a ‘course shuffling’ algorithm that ensures the modified virtual mobility test includes similar number and types of obstacles, number and types of tasks, path complexity, and luminance levels as an initial virtual mobility test.
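
A minimal sketch of one possible course shuffling approach is shown below; the template keys are illustrative assumptions, and an actual implementation would also need to verify path complexity and the other equivalence criteria mentioned above.

```python
import random

def shuffle_course(template, seed=None):
    """Produce a layout variant from a template of fixed difficulty.

    The template is assumed to fix the properties that define difficulty
    (number and types of obstacles, number of tasks, path, luminance level);
    only the assignment of obstacles to candidate positions is randomized,
    so repeated runs differ in layout but not in nominal difficulty.
    """
    rng = random.Random(seed)
    positions = list(template["candidate_positions"])
    rng.shuffle(positions)
    layout = list(zip(template["obstacles"], positions))
    return {
        "path": template["path"],
        "luminance_lux": template["luminance_lux"],
        "layout": layout,
    }
```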

In some embodiments, a virtual mobility test or a related obstacle course therein may be configured, generated, or displayed based on various configurable settings. For example, a test operator may input or modify a configuration file with various settings. In this example, VMTS 100 or one or more modules therein may use the settings to configure, generate, and/or display the virtual mobility test or a related obstacle course therein.

In some embodiments, VMTS 100 or one or more modules therein may configure a virtual mobility test for testing a user’s vision function in a variety of lighting conditions. For example, light levels utilized for a virtual mobility test may be routinely encountered in day-to-day situations, such as walking through an office building, crossing a street at dusk, or locating objects in a dimly-lit restaurant.

In some embodiments, VMTS 100 or one or more modules therein may adjust lighting conditions for a virtual environment or related obstacle course. For example, VMTS 100 or one or more modules therein may adjust luminance of obstacles, path arrows, hands and feet, finish line, and/or floor tiles associated with the virtual environment or related obstacle course. In another example, VMTS 100 or one or more modules therein may design aspects (e.g., objects, obstacles, terrain, etc.) of the virtual environment to minimize light bleeding and/or other issues that can affect test results (e.g., by using Gaussian textures on various virtual obstacles or other virtual objects).

In some embodiments, a virtual mobility test may be configured such that various types of user feedback are provided to a user. In some embodiments, auditory or haptic feedback may be provided (e.g., via an XR headset, like user display 108) to a user (e.g., of VMTS 100) when a feedback condition (e.g., a successful tagging procedure or an unsuccessful tagging procedure) occurs. For example, auditory feedback for a successful tagging procedure may include a chime and/or a voice saying "tagging successful", while auditory feedback for an unsuccessful tagging procedure may include a bong and/or a voice saying "tagging unsuccessful, please try again". In another example, haptic feedback for a successful tagging procedure may be a short, pleasant buzz, while haptic feedback for an unsuccessful tagging procedure may be a longer, less pleasant rumble. In another example, auditory feedback may be provided when a user steps on a starting platform and when the user exits or completes a virtual mobility test or course. In some embodiments, three-dimensional (3-D) spatial auditory feedback may be provided to a user (e.g., via speakers associated with user display 108 or VMTS 100) when the user collides with an obstacle during a virtual mobility test. In this example, the auditory feedback may emulate a real-life sound or response (e.g., a ‘clanging’ sound or a ‘scraping’ sound depending on the obstacle, or a click when the user climbs up a “step”) and may be usable by the user to correct their direction or movements. In another example, haptic feedback may be provided to a user (e.g., via handheld controllers or sensors 110 associated with user display 108 or VMTS 100) when the user goes off-course (e.g., away from a designated path) in the virtual environment. In this example, by using haptic feedback, the user can be made aware of this occurrence without requiring a test operator to physically guide them back on-course, and the system can also test whether the user can self-redirect appropriately without assistance.
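A minimal sketch of how feedback cues might be selected per feedback condition is shown below; the cue names, durations, and the mapping itself are illustrative assumptions rather than a prescribed implementation, and actual playback would go through the headset audio and controller haptics.

    def feedback_for(condition):
        # Map a feedback condition to example auditory and haptic cues.
        cues = {
            "tag_success": {"audio": "chime", "voice": "tagging successful",
                            "haptic": ("short_buzz", 0.1)},
            "tag_failure": {"audio": "bong", "voice": "tagging unsuccessful, please try again",
                            "haptic": ("long_rumble", 0.6)},
            "collision":   {"audio": "clang", "voice": None, "haptic": ("bump", 0.3)},
            "off_course":  {"audio": None, "voice": None, "haptic": ("pulse", 0.4)},
        }
        return cues.get(condition)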

In some embodiments, VMTS 100 or one or more modules therein (e.g., TC 102 and/or sensor data collector 104) may analyze various data associated with a virtual mobility test. For example, VMTS 100 or modules therein may record a virtual mobility test user’s performance using sensors 110 and/or one or more video cameras. In this example, the data captured may be measured and analyzed using quantitative analysis (e.g., based on objective criteria). In some embodiments, in contrast to conventional mobility tests, there may be little to no subjective interpretation of the performance. For example, from the start to the finish of a virtual mobility test (e.g., timed from when the virtual environment or scene is displayed until the user touches a finish flag at the finish line), details of each collision and details of movement of the user’s head, hands, and feet may be recorded and analyzed. In some embodiments, additional sensors (e.g., eye trackers and/or other devices) may be used to detect and record movements of other parts of the body.

In some embodiments, an obstacle in a virtual mobility test may include an object adjacent to a user’s path (e.g., a rectangular object, a hanging sign, a floating object), a black tile or an off-course (e.g., off-path) area, a “pushdown” or “step-down” object that a user must press or depress (otherwise there is a penalty for collision with or avoidance of this object), or an object on the user’s path that needs to be stepped over.

In some embodiments, data captured digitally during testing may be analyzed for performance of the user. For example, the time before taking the first step, the time necessary to complete a virtual mobility test, the number of errors (e.g., bumping into obstacles, using feet to ‘feel’ one’s way, and/or going off course and then correcting after receiving auditory feedback), and the attempt of the user to correct themselves after they have collided with an obstacle may all be assessed to develop a composite analysis metric or score. In some embodiments, an audiotape and/or videotape may be generated during a virtual mobility test. In such examples, digital records (e.g., sensor data or related information) and the audiotape and/or videotape may comprise source data for analyzing a user’s performance.

In some embodiments, VMTS 100 or related entities may score or measure a user’s performance during a virtual mobility test by using one or more scoring parameters. Example scoring parameters may include: a collision penalty, assigned each time a particular obstacle is bumped, or a single score penalty per obstacle bumped (even if an obstacle is bumped multiple times); an off-course penalty, assigned if both feet are on tile(s) that do not have arrows or if the user bypasses tiles with arrows on the course (if one foot straddles the border of an adjacent tile or if the user steps backward on the course to take a second look, this may not be considered off-course); and a guidance penalty, assigned if a user needs to be directed back on course by the test giver (or the virtual environment).
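The sketch below illustrates one way such penalties and the run time could be combined into a composite score; the weights, function signature, and the idea that lower is better are illustrative assumptions.

    def composite_score(run_time_s, collisions, off_course_events, guidance_events,
                        w_time=1.0, w_collision=5.0, w_off_course=3.0, w_guidance=10.0):
        # Combine run time and penalty counts into one number (lower is
        # better). The weights are placeholders, not values prescribed
        # by the test protocol.
        return (w_time * run_time_s
                + w_collision * collisions
                + w_off_course * off_course_events
                + w_guidance * guidance_events)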

In some embodiments, VMTS 100 or related entities (e.g., a data storage 106, sensor data collector 104, or external device) may store test data and/or record a user’s performance in a virtual mobility test. For example, VMTS 100 or another element may record a user’s progress by recording frame by frame movement of head, hands, and feet using data from one or more sensors 110. In some embodiments, data associated with each collision between a user and an obstacle may be recorded and/or captured. In such embodiments, a captured collision may include data related to bodies or items involved, velocity of the body part(s) (e.g., head, foot, arm, etc.) involved in the collision, acceleration of the body part(s) (e.g., head, foot, arm, etc.) involved in the collision, the point of impact, the time and/or duration of impact, and scene or occurrence playback (e.g., the playback may include a replay (e.g., a video) of an avatar (e.g., graphics representing the user or body parts thereof) performing the body part movements that cause the collision).
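A minimal sketch of a per-collision record covering the fields listed above is shown below; the field names, types, and units are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class CollisionRecord:
        # One captured collision between the user and a virtual obstacle.
        obstacle_id: str                               # which virtual object was hit
        body_part: str                                 # e.g., "head", "left_foot"
        velocity_m_s: Tuple[float, float, float]       # body-part velocity at impact
        acceleration_m_s2: Tuple[float, float, float]  # body-part acceleration at impact
        point_of_impact: Tuple[float, float, float]    # scene coordinates of the impact
        time_s: float                                  # time of impact from test start
        duration_s: float                              # how long contact lasted
        playback_frames: List[int] = field(default_factory=list)  # frame indices for replay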

In some embodiments, administering a virtual mobility test may include a user (e.g., a test participant) and one or more test operators, observers, or assistants. For example, a virtual mobility test may be conducted by a study team member and a technical assistant. Alternatively, the study team member may both administer the test and monitor equipment used in the testing. The study team member may be present to help the user with course redirects or physical guidance, if necessary. The test operators, observers, and/or assistants may not give instructions or advice during the virtual mobility test. In some embodiments, a virtual mobility test may be conducted on a level floor in a space appropriate for the test, e.g., in a room with clearance of a 12 feet (ft) x 7 ft rectangular space, since the test may include one or more courses that require a user to turn in different directions and avoid obstacles of various sizes and heights along the way.

In some embodiments, before administering a virtual mobility test, the test may be described to the user and the goals of the test may be explained (e.g., complete the course(s) as accurately and as quickly as possible). The user may be instructed to do their best to avoid all of the obstacles except for the steps, and to stay on the path. The user may be encouraged to take their time and focus on accuracy. The user may be reminded not only to look down for guidance arrows showing the direction to walk, but also to scan back and forth with their eyes so as to avoid obstacles that may be on the ground or at any height up to their head.

In some embodiments, a user may be given a practice session so that they understand how to use the equipment, recognize guidance arrows that must be followed, are familiar with the obstacles and how to avoid or overcome them (e.g., how to step on the “push down” obstacles), and also understand how to complete the virtual mobility test (e.g., by touching a finish flag to mark completion of the test or by exiting a door). The user may be reminded that during the official or scored test, the course may be displayed to one eye or the other or to both eyes. The user may be told that they will not receive directions while the test is in progress. However, under certain circumstances (e.g., if the user does not know which way to go and pauses for more than 15 seconds), the tester or an element of the virtual mobility test (e.g., flashing arrows, words, sounds, etc.) may recommend that the user choose a direction, or the user may use a specific maneuver of the controller(s) to request a new configuration (e.g., raising both controllers over their head for 5 seconds, after which a new course will appear). The tester may also assist and/or assure the user regarding safety issues, e.g., the tester may stop the user if a particular direction puts the user at risk of injury.

In some embodiments, a user may be given one or more different practice tests (e.g., three tests or as many as are necessary to ensure that the user understands how to take the test). A practice test may use one or two courses that are different from courses used in the non-practice tests (e.g., tests that are scored) to be given. The same practice courses may be used for each user. The practice runs of a user may be recorded; however, the practice runs may not be scored.

In some embodiments, when a user is ready for an official (e.g., scored) virtual mobility test, the user may be fitted with user display 108 (e.g., a VR headset) and sensors 110 (e.g., body movement detection sensors). The user may also be dark adapted prior to the virtual mobility test. The user may be led to the virtual mobility test origin area and instructed to begin the test once the VR scene (e.g., the virtual environment) is displayed in user display 108. Alternatively, the user may be asked to move to a location containing a virtual illuminated circle on the floor which, when the test is illuminated, will become the starting point of the test. The onset of the VR scene in user display 108 may mark the start of the test. During the test, an obstacle course may be traversed first with one eye “patched” or unable to see the VR scene (e.g., user display 108 may not show visuals on the left (eye) display, but show visuals on the right (eye) display), then the other eye “patched”, then both eyes “un-patched” or able to see the VR scene (e.g., user display 108 may show visuals on both the left and the right (eye) displays). The mobility test may involve various iterations of an obstacle course at different light intensities (e.g., incrementally dimmer or brighter), and at different layouts or configurations of elements therein (e.g., the path taken and the obstacles along the path may be changed after each iteration). For example, each obstacle course attempted by a user may have the same number of guidance arrows, turns, and obstacles, but to preclude a learning effect or bias, each attempt by the user may be performed using a different iteration of the obstacle course.

In some embodiments, a virtual mobility test or a related test presentation (e.g., administration) may be generated or modified for various purposes, e.g., data capture, data analysis, and/or educational purposes. For example, VMTS 100 or one or more modules therein may generate and administer a virtual mobility test that mimics a given vision condition. In this example, mimicking a vision condition may involve affecting a presentation of a virtual mobility course, e.g., blurring and/or dimming a virtual scene shown in a VR headset so that a user with normal sight (e.g., 20/20 vision and no known vision conditions) experiences symptoms of the vision condition. In this example, VMTS 100 or one or more modules therein may use results from the affected tests to generate ‘condition-affected’ baseline results obtained using normally-sighted users and/or for educating people about certain vision conditions.

In some embodiments, a virtual mobility test or a related test presentation may be generated or modified for diagnosing specific vision disorders. In some examples, a virtual mobility test or a related test presentation (e.g., administration) may diagnose forms of RP or Leber’s congenital amaurosis by testing a user’s performance using a virtual mobility course under different levels of luminance. In some examples, a virtual mobility test or a related test presentation (e.g., administration) may precisely measure how much red, green, and/or blue needs to be present (e.g., in a virtual object or background) to be detectable by a user and may use this measurement for diagnosing color blindness, achromatopsia or other disorders of central vision. In such examples, the test or presentation may involve adjusting levels of red, green, and blue light and/or altering colors of obstacles or backgrounds during testing.

In some embodiments, a virtual mobility test or a related test presentation may be generated or modified for characterizing loss of peripheral vision (e.g., in individuals with glaucoma or RP) by testing different degrees of peripheral vision alone in the goggles (e.g., with no central vision). By incorporating eye-tracking and adding progressively more areas to view, a virtual mobility test or a related test presentation may determine the exact extent of peripheral field loss. In some examples, a peripheral vision related test or presentation may include an option for making virtual walls encroach upon or expand about a user depending on their performance on a given test. This may be analogous to a “staircase testing fashion” usable for measuring light sensitivity in the RPE65 form of Leber’s congenital amaurosis (except that it is applied to visual fields).

In some embodiments, a virtual mobility test or a related test presentation may be generated or modified for characterizing nystagmus (e.g., abnormal rotatory eye movements found in a number of vision disorders including Leber’s congenital amaurosis, ocular albinism, use of certain drugs, neurologic conditions) by using pupil-tracking to measure the amplitude of nystagmus and changes in amplitude associated with gaze or field of view. Nystagmus is associated with loss of visual acuity and so characterization and identification of nystagmus may lead to treatments which dampen nystagmus and thus improve vision.

In some embodiments, a virtual mobility test or a related test presentation may be generated or modified for assessing stereo vision by using a stereographic representation of a mobility course that measures the virtual distance of a user to the virtual objects and can be used to measure depth perception and the user’s sense of proximity to objects. The tester can query, for example, how many steps the user needs to take to reach the door or a stop sign.

In some embodiments, a virtual mobility test or a related test presentation may be generated or modified for individuals with additional (e.g., non-visual) conditions or disabilities. For example, for individuals with impaired mobility, instead of using their legs to walk, their hands can be used to point and/or click in the direction the individual chooses to move. In some examples, as a user is “moving” through a virtual mobility course, the user can point and click at various obstacles, thereby indicating that the individual recognizes them and is avoiding them. In another example, if movement of the legs or hands of a user cannot be monitored, VMTS 100 or related entities may utilize pupil tracking software to allow scoring based on the changes in direction of the gaze of the user. In some examples, data derived from pupil tracking software can complement data obtained from trackers tracking other body parts of the user. In some examples, when administering a virtual mobility test, VMTS 100 or related entities may provide auditory or tactile feedback to users with certain conditions for indicating whether or not they collided with an object. In such examples, auditory feedback may be provided through earphones or speakers on a headset and tactile feedback may be provided using vibrations via sensors 110 (e.g., on the feet, hands or headset depending on the location of the obstacle).

In some embodiments, VMTS 100 or one or more modules therein may utilize tracking of various user related metrics during a virtual mobility test and may use these metrics along with visual response data when analyzing the user performance and/or determining a related score. Such metrics may include heart rate metrics, eye tracking metrics, respiratory tracking metrics, neurologic metrics (e.g., what part of the brain is excited and where, when; through electroencephalogram (EEG) sensors), auditory response metrics (e.g., to determine how those relate to visual performance since individuals with visual deficits may have enhanced auditory responses); distance sensitivity metrics (e.g., using LIDAR to measure a user-perceived distance to an object). In some embodiments, VMTS 100 or one or more modules therein may utilize pupillary light reflexes (e.g., captured during pupil tracking) for providing additional information regarding consensual response (and the function of sensorineural pathways leading to this response) as well as emotional responses and sympathetic tone.

In some embodiments, a virtual mobility test or a related test presentation may be generated or modified for data capturing or related analyses. For example, VMTS 100 or one or more modules therein may administer a test to normally-sighted control individuals and then may administer the test one or more subsequent times under conditions where the disease is mimicked through a user’s display (e.g., VR goggles).

In some examples, a control population of normally-sighted individuals is used to compare responses with a set of individuals with a form of RP that can result in decreased light sensitivity, blurring of vision, and visual field defects. In some examples, a virtual mobility test that is given to normally-sighted individuals and those with RP may be the same or similar. After the virtual mobility test has been administered to the normally-sighted individuals, those individuals may be given another set of tests where the symptoms of RP are mimicked in presentation of the scene (e.g., via VR goggles). In some examples, mimicking the symptoms in presentation may include significantly reducing the lighting in the virtual scene, blurring (e.g., Gaussian blurring) central vision in the virtual scene, or blacking out or blurring patches of the peripheral visual fields in the virtual scene. In such examples, the performance of the individuals tested under conditions mimicking this disorder may be measured. The data under these conditions can be used as either a “control” group for virtual mobility performance or to control for the validity of the test.

In some examples, a control population of normally-sighted individuals is used to compare responses with a set of individuals with Stargardt disease that can result in poor visual acuity and poor color discrimination. In some examples, the virtual mobility test that is given to both normally-sighted individuals and those with Stargardt disease may incorporate a path defined by very large arrows and obstacles with colors that differ only slightly from the color of the background, or colors used in the test may be in greyscale. After the virtual mobility test has been administered to the normally-sighted individuals, those individuals may be given another set of tests where the symptoms of Stargardt disease are mimicked in presentation of a virtual scene (e.g., via VR goggles). In some examples, mimicking the symptoms in presentation may include blurring (e.g., Gaussian blurring) central vision when displaying the virtual scene. As such, the test may be easy for the normally-sighted individuals until their second round of testing. In such examples, the performance of the individuals tested under conditions mimicking this disorder may be measured. The data under these conditions can be used as either a “control” group for virtual mobility performance or to control for the validity of the test.

In some embodiments, conditions mimicking a vision condition or a related state can be inserted randomly or at defined moments while testing normally-sighted individuals. In some embodiments, a specific vision loss of a given patient could be emulated in a control patient. For example, data relating to visual fields, visual acuity, color vision, etc. that is measured in the clinic for a given user can be used to mimic this condition in the goggles for another user.

In some embodiments, various modifications of a virtual mobility test or a related scene presentation may be performed in order to mimic various visual conditions, including, for example, turning colors to greyscale, eliminating a specific color, presenting test objects only in a single color, showing gradient shading across objects or showing mono-color shading, rendering meshes, showing edges of objects only, inverting or reversing images, rotating images, hiding shadows, or distorting perspective (e.g., making things appear closer or farther).
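As one illustration, the sketch below shows how two such modifications (greyscale conversion and blurring of central vision) could be applied to a rendered frame, assuming frames are available as NumPy RGB arrays with values in [0, 1]; the function names and parameters are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def to_greyscale(frame):
        # Convert an RGB frame (H x W x 3) to greyscale while keeping
        # three channels so it can be displayed unchanged.
        grey = frame @ np.array([0.299, 0.587, 0.114])
        return np.repeat(grey[..., None], 3, axis=-1)

    def blur_central_vision(frame, sigma=8.0, radius_frac=0.25):
        # Gaussian-blur only a central circular region of the frame to
        # mimic loss of central acuity; peripheral content is untouched.
        h, w = frame.shape[:2]
        yy, xx = np.ogrid[:h, :w]
        radius = radius_frac * min(h, w)
        mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
        blurred = gaussian_filter(frame, sigma=(sigma, sigma, 0))
        out = frame.copy()
        out[mask] = blurred[mask]
        return out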

In some embodiments, VMTS 100 or one or more modules therein may utilize virtual mobility test or related presentations for education purposes. For example, VMTS 100 or one or more modules therein may generate and administer a virtual mobility test that mimics a given vision condition. In this example, mimicking a vision condition may involve affecting a presentation of a virtual mobility course, e.g., blurring and/or dimming a virtual scene shown in a VR headset so that a user with normal sight (e.g., 20/20 vision and no known vision conditions) experiences symptoms of the vision condition.

In some examples, a virtual mobility test that mimics a given vision condition may be given to caregivers, medical students, family members, social workers, policy makers, insurance providers, architects, educational testing administrators, traffic controllers, etc., thereby providing first-hand experience regarding the daily challenges faced by individuals with vision conditions. By experiencing visual disabilities in this manner, those individuals can better design living and working conditions for enhancing the safety and visual experience of those with various vision impairments.

In some embodiments, VMTS 100 or one or more modules therein may provide a “light” version of a particular virtual mobility test and/or may utilize available technology for presenting the test. For example, a “light” version of a particular VR-based virtual mobility test may be generated or adapted for an augmented reality (AR) experience on a smartphone or a tablet computer, e.g., when VR testing is not feasible or easily accessible. AR-based testing could assist remote or underserved populations or those that are isolated due to disease or economic factors. Such AR-based testing may be used in conjunction with telemedicine or virtual education.

In some embodiments, VMTS 100 or one or more modules therein may utilize various technologies, e.g., artificial intelligence (AI) and/or AR, for diagnostics and training. For example, the ability of some AR headsets to see the room as well as a virtual scene simultaneously (also referred to here as “inside outside viewing”) may be usable for incorporating a user’s real-world home life (or work life) into a course that allows the user to practice and improve their navigation. In this example, AR-based courses can be useful for training individuals to better use their (poor) vision. In some examples, AR-based testing may be useful for in-home monitoring of a user’s current condition and/or progress. In such examples, by using AR-based testing and/or portable and easy-to-use hardware, the user’s vision function can still be monitored even in less than ideal environments or situations, such as pandemics. In some examples, using AI-based algorithms and/or associated metrics, VMTS 100 or one or more modules therein may gather additional data to identify trends in user performance and also to train or improve the ability of the user to better use their impaired vision, e.g., by measuring or identifying progress. In such embodiments, using AI-based algorithms and/or associated metrics, VMTS 100 or one or more modules therein may identify and improve aspects of a test or related presentation for various goals, e.g., improving diagnostics and training efficiency.

In some embodiments, VMTS 100 or one or more modules therein may allow a multi-user mode or social engagement aspect to virtual mobility testing. For example, VMTS 100 or one or more modules therein may administer a virtual mobility test to multiple users concurrently, where the users can interact and related interaction (or avoidance of collisions) can be measured and/or evaluated.

It will also be appreciated that the above described modules, components, and nodes in Figure 1 are for illustrative purposes and that features or portions of features described herein may be performed by different and/or additional modules, components, or nodes than those depicted in Figure 1. It will also be appreciated that some modules and/or components may be combined and/or integrated. For example, user display 108 and processing platform 101 may be integrated into a single computing device, module, or system. For example, a VIVE VR system, a mobile computing device, or smartphone configured with appropriate VR software, hardware, and mobility testing logic may generate a virtual environment and may perform and analyze a mobility test in the virtual environment. In this example, the mobile computing device or smartphone may also display and record a user’s progress through the virtual mobility test.

Figures 2A-2B are diagrams 200-202 illustrating example templates for virtual mobility tests. Referring to diagram 200 of Figure 2A, a template editor user interface showing an example template is depicted. In some embodiments, a virtual environment and/or a related obstacle course may utilize a template generated using a program called Tiled Map Editor (http://www.mapeditor.org). In such embodiments, a user may select the File Menu, select the New Option, and then select the New Map Option (File Menu->New->New Map) to generate a new map. In some embodiments, the user may configure various aspects of the new map, e.g., the new map may be set to ‘orthogonal’ orientation, the tile format may be set to ‘CSV’, and the tile render order may be set to ‘Left Up’.

In some embodiments, the tile size may be a width of 85 pixels (px) and a height of 85 px. The map size may be fixed, and the number of tiles may be user-configurable. In some embodiments, the dimensions may be set to a width of 5 tiles and a length of 10 tiles (e.g., 5 ft X 10 ft). In some embodiments, the name of the map may be selected or provided when the map is saved (e.g., File Menu->Save As).

In some embodiments, a tile set may be added to the template (File Menu->New->New Tile set). A tile set may include a number of tile types, e.g., basic path tiles are straight, turn left, and turn right. A tile set may also include variations of a tile type, e.g., a straight tile type may include hanging obstacle tiles, button tiles, and step-over tiles. In some embodiments, a tile set may also provide one or more colors, images, and/or textures for the path or tiles in a template. In some embodiments, a tile set may be named and a browse button may be used to select an image file source, and the appropriate tile width and height may also be inputted (e.g., 85 px for tile width and height).

In some embodiments, to place a tile on the map, a user may select or click a tile from the tile set and then click the square on which to place the selected tile. For example, to create a standard mobility test or related obstacle course, a continuous path may be created from one of the start tiles to the finish tile.

In some embodiments, after a template is created, the template may be exported and saved to a location for use by VMTS 100 or other entities (File Menu->Export As). For example, after exporting a template as a file named map.csv, the file may be stored in a folder along with a config.csv containing additional configuration information associated with a virtual mobility test. In this example, VMTS 100 or related entities may use the CSV files to generate a virtual mobility test.
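A minimal sketch of loading such files is shown below; it assumes map.csv is a plain grid of tile-type IDs and config.csv contains key,value rows, which may differ from the actual export layout.

    import csv

    def load_course(map_path, config_path):
        # Read a Tiled-exported map.csv (assumed here to be a grid of
        # tile-type IDs) and a config.csv of key,value rows.
        with open(map_path, newline="") as f:
            grid = [[int(cell) for cell in row if cell != ""] for row in csv.reader(f) if row]
        settings = {}
        with open(config_path, newline="") as f:
            for row in csv.reader(f):
                if len(row) >= 2:
                    settings[row[0].strip()] = row[1].strip()
        return grid, settings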

In some embodiments, a configuration file (e.g., config.csv) may be used to add or remove courses and/or configure courses used in a virtual mobility test. Example configuration settings for a virtual mobility test are listed below; a simplified sketch for interpreting such settings follows the list:

• play_area_width and play_area_height: The value is the width and height of the VMTS’s active area in meters. This may be determined when the VMTS is configured with room setup.
• tile_length: The value is the desired length and width of each tile in meters.
• swings_per_sec: The value indicates the number of seconds it takes for a swinging obstacle to make its full range of motion.
• subject_height: The value is the height of the user (test participant) in meters. (Some values in the configuration file may be a fraction of the user’s height. Changing this value may affect hanging_obstacle_height, arrow_height, low_height, med_height, high_height, med_obstacle_radius, and big_obstacle_radius.)
• hanging_obstacle_height: The value is the distance between the floor and the bottom of the obstacle. (The value may be a fraction of the height of the user.)
• arrow_height: The value is the distance between the guiding arrows and the floor. (The value may be a fraction of the height of the user.)
• low_height: The value is the distance between the center of low floating obstacles and the floor and the height of low box obstacles. (The value may be a fraction of the height of the user.)
• med_height: The value is the distance between the center of medium floating obstacles and the floor and the height of high box obstacles. (The value may be a fraction of the height of the user.)
• high_height: The value is the distance between the center of high floating obstacles and the floor. (The value may be a fraction of the height of the user.)
• small_obstacle_radius: The value is the radius of small floating obstacles. (The value may be a fraction of the height of the user.)
• med_obstacle_radius: The value is the radius of medium floating obstacles. (The value may be a fraction of the height of the user.)
• big_obstacle_radius: The value is the radius of large floating obstacles. (The value may be a fraction of the height of the user.)
• tiny_step: The value is the height of very small step-over obstacles. (The value may be a fraction of the height of the user.)
• small_step: The value is the height of small step-over obstacles. (The value may be a fraction of the height of the user.)
• big_step: The value is the height of big step-over obstacles. (The value may be a fraction of the height of the user.)
• huge_step: The value is the height of very big step-over obstacles. (The value may be a fraction of the height of the user.)
• box_length: The value is the width and depth of box obstacles. (The value may be a fraction of tile length.)
• parking meter: The height is 5 feet with the shape of a parking meter.
• open dishwasher (or cabinet) door: The door of a box-like dishwasher (or rectangular cabinet) may be open anywhere between a 5-90 degree angle and jut into the pathway.
• arrow_local_scale: The value indicates the local scale of the arrow (relative to tile length). A value of 1 is 100% of normal scale, which is one half the length of a tile.
• arrow_luminance: The value indicates the luminance of guiding arrows in the scene (the virtual environment or course). The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• button_luminance: The value indicates the luminance of all buttons in the scene (the virtual environment or course). The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• obstacle_luminance: The value indicates the luminance of all obstacles (e.g., box-shaped, floating, swinging, or hanging obstacles) in the scene. The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• foot_luminance: The value indicates the luminance of the user’s hands and feet in the scene. The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• finish_line_luminance: The value indicates the luminance of the finish line in the scene. The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• num_courses: The value indicates the number of course file names referenced in this configuration file. Whatever this value is, there should be that many file names ending in .csv following it, and each of those file names should correspond to a .csv file that is in the same folder as the configuration file.
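The simplified sketch below shows one possible way of interpreting these settings; it converts height-fraction values to meters using subject_height and is illustrative only, not the system’s actual parsing logic.

    # Settings whose values are expressed as a fraction of the user's
    # height, per the list above; the resolver converts them to meters.
    HEIGHT_FRACTION_KEYS = {
        "hanging_obstacle_height", "arrow_height", "low_height", "med_height",
        "high_height", "small_obstacle_radius", "med_obstacle_radius",
        "big_obstacle_radius", "tiny_step", "small_step", "big_step", "huge_step",
    }

    def resolve_settings(raw):
        # Turn raw string settings into numbers, scaling height-fraction
        # values by subject_height. Non-numeric values (e.g., course file
        # names) are passed through unchanged.
        subject_height = float(raw["subject_height"])
        resolved = {}
        for key, value in raw.items():
            try:
                number = float(value)
            except ValueError:
                resolved[key] = value
                continue
            resolved[key] = number * subject_height if key in HEIGHT_FRACTION_KEYS else number
        return resolved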

Referring to diagram 202 of Figure 2B, virtual objects usable for testing (e.g., for tagging by the user) are depicted. For example, when designing or configuring a virtual mobility test, a test operator may select one or more graphical representations of taggable virtual objects and may place or add the virtual object(s) to the virtual mobility test (e.g., via a template editor or a related graphical user interface (GUI)).

It will also be appreciated that the above described files and data in Figures 2A-2B are for illustrative purposes and that VMTS 100 or related entities may use additional and/or different files and data than those depicted in Figures 2A-2B.

Figure 3 is a diagram 300 illustrating a user performing a virtual mobility test. In Figure 3, a test observer’s view is shown on the left panel and a test user’s view is shown on the right panel. Referring to Figure 3, the test user may be at the termination point of the course looking in the direction of a green arrow. In the VR scene, a test user’s head, eyes, hands, and feet may appear white. The test observer’s view may be capable of displaying various views (e.g., first person view, overhead (bird’s eye) view, or a third person view) of a related obstacle course and may be adjustable on-the-fly. The test user’s view may also be capable of displaying various views and may be adjustable on-the-fly, but may default to the first-person view. The termination point may be indicated by a black flag and the test user may mark the completion of the course by touching the flag with his/her favored hand. Alternatively, the test user may walk into the flag or past the flag. On the right panel, the user’s “touch” may be indicated with a red sphere.

It will be appreciated that Figure 3 is for illustrative purposes and that various virtual mobility tests may include additional and/or different features than those depicted in Figure 3.

Figure 4 is a diagram 400 illustrating example objects in a virtual mobility test. Referring to Figure 4, images A-F depict example objects that may be included in a virtual mobility test. Image A depicts tiles and arrows showing the path (e.g., pointing forward, to the left, or to the right). In some embodiments, arrows may be depicted on the tiles (and not floating above the tiles). Image B depicts step-over obstacles and arrows showing the required direction of movement. Image C depicts box-shaped obstacles. Image D depicts small floating obstacles (e.g., a group of 12) at different levels of a user’s body (e.g., from ankle to head height). Image E depicts large floating obstacles (e.g., a group of 10) at different levels of a user’s body (e.g., from ankle to head height). Image F depicts obstacles that a user must step on (e.g., to mimic stairs, rocks, etc.). These obstacles may depress (e.g., sink into the floor or tiles) as the user steps on them. In some embodiments, arrows may be depicted on the tiles (and not floating above or adjacent to the tiles).

It will be appreciated that Figure 4 is for illustrative purposes and that other objects may be used in a virtual mobility test than those depicted in Figure 4. For example, a virtual mobility test may include an obstacle course containing small, medium, and large floating obstacles, parking meter-shaped posts, or doors (e.g., partially open dishwasher doors) or gates that jut into the path.

Figure 5 is a diagram 500 illustrating various sized obstacles in a virtual environment. In some embodiments, a virtual mobility test or related objects therein may be adjustable. For example, VMTS 100 or one or more modules therein may scale or resize obstacles based on a test user’s height or other physical characteristics. In this example, scalable obstacle courses may be useful for comparisons of performance of individuals who differ in height, as the user’s height (e.g., distance of the eyes to the objects on the ground) affects visual resolution (e.g., visual acuity). The ability to resize objects in a virtual mobility test is also useful for following the visual performance of a child over time, e.g., as the child will grow and become an adult. In some embodiments, scaling an obstacle course may also be useful to ensure that obstacles or elements in the virtual environment (e.g., tiles that make up course segments) are sized appropriately (e.g., so that a user’s foot can fit along an approved path through the virtual obstacle course).

Referring to Figure 5, images A-C depict various sized obstacles from an overhead view. For example, image A depicts an overhead view of small floating obstacles, image B depicts an overhead view of medium floating obstacles, and image C depicts an overhead view of large floating obstacles.

It will be appreciated that Figure 5 is for illustrative purposes and that objects may be scaled in more precise terms and/or with more granularity (e.g., a percentage or fraction of a test user’s height). For example, a virtual mobility test may include an obstacle course containing obstacles that appear to be 18.76% of the height of a test user.

Figure 6 is a diagram 600 illustrating virtual mobility tests with various lighting conditions. In some embodiments, lighting conditions in a virtual mobility test may be adjustable. For example, VMTS 100 or one or more modules therein may adjust lighting conditions for a virtual environment or related obstacle course associated with a virtual mobility test. For instance, VMTS 100 or one or more modules therein may adjust luminance of various objects (e.g., obstacles, path arrows, hands, head and feet, finish line, and/or floor tiles) associated with a virtual mobility test.

In some embodiments, individual obstacles and/or groups of obstacles can be assigned different luminance, contrast, shading, outlines, and/or color. In some embodiments, each condition or setting may be assigned a relative value or an absolute value. For example, assuming luminance can be from 0.1 lux to 400 lux, a first obstacle can be displayed at 50 lux and a second obstacle can be assigned to a percentage of the first obstacle (e.g., 70% or 35 lux). In this example, regardless of a luminance value, some objects in a virtual mobility test may have a fixed luminance (e.g., a finish flag).
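The sketch below shows one way a luminance specification could be resolved to an absolute value, supporting both absolute and percentage-of-reference inputs; the range limits and behavior are illustrative assumptions, and actual limits depend on the display.

    def resolve_luminance(spec, reference_lux=None, lo=0.1, hi=400.0):
        # Resolve an obstacle luminance specification to lux. `spec` may be
        # an absolute value ("50") or a percentage of a reference obstacle
        # ("70%"). The 0.1-400 lux range mirrors the example above.
        if isinstance(spec, str) and spec.endswith("%"):
            if reference_lux is None:
                raise ValueError("percentage spec requires a reference luminance")
            value = float(spec[:-1]) / 100.0 * reference_lux
        else:
            value = float(spec)
        return max(lo, min(hi, value))

    # resolve_luminance("70%", reference_lux=50.0) -> 35.0, as in the example above.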

Referring to Figure 6, images A-C depict a mobility test under different luminance conditions with arrows highlighted for illustrative purposes. For example, image A shows a mobility test displayed under low luminance conditions (e.g., about 1 lux); image B shows a mobility test with a step obstacle displayed under medium luminance conditions (e.g., about 100 lux); and image C shows a mobility test with a step obstacle and other objects displayed under high luminance conditions (e.g., about 400 lux).

It will be appreciated that Figure 6 is for illustrative purposes and that a virtual mobility test may include different and/or additional aspects than those depicted in Figure 6.

Figures 7A-7B are diagrams 700-702 illustrating example data captured during a virtual mobility test. In some embodiments, VMTS 100 or one or more modules therein (e.g., TC 102 and/or sensor data collector 104) may analyze various data associated with a virtual mobility test. For example, VMTS 100 or modules therein may gather data from sensors 110, information regarding the virtual environment (e.g., locations and sizes of obstacles, path, etc.), and/or one or more video cameras. In this example, the data captured may be measured and analyzed using quantitative analysis (e.g., based on objective criteria).

Referring to diagram 700 of Figure 7A, captured data may be stored in one or more files (e.g., test_events.csv and test_scene.csv files). Example captured data may include details of a virtual mobility test (e.g., play area, etc.), a particular configuration of the course, a height of a test user, sensor locations (e.g., head, hands, feet) as a function of time (e.g., indicative of body movements), direction of gaze, acceleration and deceleration, leaning over to look more closely at an object, and/or amount of time interacting with each obstacle.
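A minimal sketch of appending such samples to a test_events.csv-style file is shown below; the column layout and timestamp convention are hypothetical.

    import csv, os, time

    def log_samples(path, samples):
        # Append timestamped sensor positions (head, hands, feet) to a
        # test_events.csv-style file. `samples` is an iterable of
        # (sensor_name, x, y, z) tuples.
        write_header = not os.path.exists(path) or os.path.getsize(path) == 0
        with open(path, "a", newline="") as f:
            writer = csv.writer(f)
            if write_header:
                writer.writerow(["t", "sensor", "x", "y", "z"])
            now = time.monotonic()
            for name, x, y, z in samples:
                writer.writerow([f"{now:.3f}", name, x, y, z])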

Referring to diagram 702 of Figure 7B, captured data for a recorded test or scene may be stored in one or more files (e.g., recorded_scene_3-2-2022_12-8.csv files). Example captured data may include details of a user’s interactions in a virtual mobility test (e.g., user data, object information, object interaction data, controller input, and/or other data).

It will be appreciated that Figures 7A-7B are for illustrative purposes and that different and/or additional data than those depicted in Figures 7A-7B may be captured or obtained during a virtual mobility test.

Figures 8A-8B are diagrams 800-802 illustrating various aspects of example virtual mobility tests. In some embodiments, VMTS 100 or one or more modules therein may be capable of providing real-time or near real-time playback of a user’s performance during a virtual mobility test. In some embodiments, VMTS 100 or one or more modules therein may be capable of recording a user’s performance during a virtual mobility test. For example, VMTS 100 or modules therein may use gathered data from sensors 110, and/or other input, to create an avatar representing the user in the virtual mobility test and may depict the avatar interacting with various objects in the virtual mobility test. For example, a video or playback of a user performing a virtual mobility test may depict the user’s head, eyes, hands, and feet appearing white in the video and may depict the user walking through an obstacle course toward the termination point (e.g., a finish flag) of the course. In this example, to emphasize the user bumping into a hanging sign and stepping backwards, a green arrow may point to a red circle located at the location of the collision.

Referring to diagram 800 of Figure 8A, a snapshot from a playback of a user performing a virtual mobility test is shown. In the snapshot, start location 802 represents the start of an obstacle course; avatar 804 represents the user’s head, hands, and feet; floating obstacle 806 represents a head height obstacle in the obstacle course (e.g., one way to avoid such an obstacle is to duck); indicators 808 represent a collision circle indicating where a collision between the user and the virtual environment occurred and an arrow pointing to the collision circle; and finish location 810 represents the end of the obstacle course.

Referring to diagram 802 of Figure 8B, image A shows virtual objects (such as a table, a wet floor sign, a cabinet with an open door, a skateboard on the floor, and a ceiling fan) of a virtual mobility test along with virtual representations of a user’s headset and hand-held controllers. For example, as shown, various virtual objects may be on or near a path lit by red arrows in a virtual scene. In this example, the virtual objects may be presented at prescribed light intensities. Image B of diagram 802 depicts a virtual scene with the virtual objects and path shown (with virtual representations of a user’s headset and hand-held controllers).

It will be appreciated that Figures 8A-8B are for illustrative purposes and that different and/or additional aspects than those depicted in Figures 8A- 8B may be part of a virtual mobility test.

Figures 9A-9E depict graphs indicating various data gathered from subjects in a study associated with counting tagged virtual objects during virtual mobility testing. The study evaluated 32 subjects: 11 normally-sighted control subjects and 21 subjects with various vision conditions (e.g., retinal degeneration or RP).

Referring to Figure 9A, a graph 900 may be a box plot indicating counts of tagged virtual objects for subjects, eyes of subjects, or groups of subjects (e.g., a normal both eyes (OU) grouping) during an initial VR test using a luminosity of 0.02 cd/m². As depicted, graph 900 shows that normally-sighted subjects tagged about 9 virtual objects during testing, while subjects with vision conditions generally tagged fewer than 9 virtual objects. The results depicted in graph 900 reveal whether or not a subject has retinal disease in the vast majority of the subjects (e.g., 19 out of the 21 afflicted). In general, subjects with retinal disease identified significantly fewer virtual objects during testing than normally-sighted subjects.

Virtual mobility testing involving tagging virtual objects may be usable to correlate testing performance and clinical measures. For example, when testing for RP or retinal degeneration, a virtual mobility test may be configured to use a luminosity of 0.02 cd/m² with a "wraparound" filter for testing visual fields. Such a filter can reduce the light intensity further (for example, in 1.6 log unit increments). In this example, subjects with retinal degeneration (who performed well in the initial test represented by Figure 9A) may be identified using this additional test. Continuing with this example, test performance can be quantified using each eye alone or both eyes together.

In some embodiments, a virtual mobility test or a course thereof may be presented at step-wise increases of four standard luminance conditions (-0.67, -0.19, +0.39, and +0.69 log phot.cd.m-2). For example, for one test, an initial luminance step (-0.67 log phot.cd.m-2) may be usable for defining whether additional attenuation of the virtual environment is needed (e.g., since normal subjects may detect all objects at that luminance level). In this example, for the low luminance range, neutral density (ND) filters may be added to extend the operating range toward lower luminances: one 1.5 log unit ND filter may be added for patients with a vision-related condition (e.g., an inherited retinal degeneration (IRD)), or a sandwich of two ND filters for normal subjects, which brings the total number of possible luminance steps from 4 to 8 or 12 with the addition of one or two ND filters, respectively. In some embodiments, each user may perform up to three runs per luminance level. For example, repeated runs per luminance level may provide information regarding intra-session test-retest variability, and may allow assessment for learning effects that may still exist after the initial training session. In this example, for each run, a different course configuration is used in order to minimize potential learning effect. Continuing with this example, after testing both eyes simultaneously, virtual mobility testing may then be repeated through the entire luminance range for each eye individually.
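The sketch below reproduces only the arithmetic of extending the luminance ladder with ND filters, using the four standard steps and the 1.5 log unit filter value quoted above; it is illustrative only.

    BASE_STEPS_LOG = [-0.67, -0.19, 0.39, 0.69]   # the four standard steps quoted above

    def luminance_ladder(nd_filters=0, nd_log_units=1.5):
        # Return the full set of log-luminance steps when 0, 1, or 2 neutral
        # density filters are stacked in front of the display; each filter
        # shifts every base step down by nd_log_units.
        steps = []
        for n in range(nd_filters + 1):
            steps.extend(step - n * nd_log_units for step in BASE_STEPS_LOG)
        return sorted(steps)

    # 0 filters -> 4 steps, 1 filter -> 8 steps, 2 filters -> 12 steps,
    # matching the step counts described above.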

Referring to Figure 9B, a graph 902 may be a box plot indicating counts of tagged objects for subjects, eyes of subjects, or groups of subjects (e.g., a normal both eyes (OU) grouping) during a second VR test using a luminosity less than 0.02 cd/m². As depicted, graph 902 shows that normally-sighted subjects tagged about 8 virtual objects during testing, while two subjects having bilateral disease (that were not identified in the first test) tagged 6 or fewer virtual objects.

Referring to Figure 9C, a graph 904 may be a box plot indicating counts of tagged virtual objects for subjects, eyes of subjects, or groups of subjects (e.g., a normal both eyes (OU) grouping) during a VR test using a total luminosity of 0.0005 cd/m² (combined luminosity of the headset presentation at 0.02 cd/m² and a "wraparound" filter). As depicted, graph 904 shows a subject with unilateral RP tagging significantly fewer objects when only the right eye (OD) is tested than when the subject is tested with the left eye (OS) only or with both eyes (OU). The results depicted in graph 904 may correlate with clinical measures associated with the subject. For example, clinical measures for the subject with unilateral RP may indicate the right eye (OD) is affected, the right eye has a visual field of less than 30 degrees, the condition is related to cone-mediated function, and the left eye of the subject has normal rod function.

Referring to Figure 9D, a graph 906 may be a box plot indicating counts of tagged virtual objects for eyes of subject 'VR61' having RPE65 gene mutations. As depicted, graph 906 shows four boxes for each eye, where each box represents a virtual mobility test of a different luminosity (e.g., increasing from left to right). The results depicted in graph 906 indicate that the worst performance (e.g., lowest count of tagged virtual objects) for each eye is the test having the lowest luminosity, but that performance improves up to or near the level of normally-sighted subjects with increasing light level (e.g., luminosity). The results depicted in graph 906 also indicate that the right eye of subject 'VR61' is worse than the left eye of subject 'VR61' for every respective test at the same luminosity. By performing tests with increasing luminosity, the testing can identify a threshold light intensity for identifying objects.

Referring to Figure 9E, a graph 908 may indicate counts of tagged virtual objects for subject 'VR69' having RPE65 gene mutations. As depicted, graph 908 shows data sets for three test configurations (e.g., both eyes (OU), right eye (OD), and left eye (OS)), where each data set includes data from tests with different luminosities (0.02 cd/m² to 0.47 cd/m²). The results depicted in graph 908 indicate that performance for the left eye alone and both eyes configurations improves significantly from the test with the least luminosity (0.02 cd/m²) to the test with the second least luminosity (0.07 cd/m²) and that no improvement is seen for these configurations from the test with the second highest luminosity (0.22 cd/m²) to the highest luminosity (0.47 cd/m²). The results depicted in graph 908 also indicate that the right eye (OD) of subject 'VR69' was unable to identify objects at any of the luminosities (e.g., light intensities) employed.

Figures 10A-10B depict vision field diagrams associated with subjects having RPE65 gene mutations. Referring to Figure 10A, a diagram 1000 may depict perimetry results from Goldmann visual field (GVF) testing of a left eye of subject 'VR61'. This individual’s virtual reality performance (# Objects Tagged) is graphed in diagram 906 of Figure 9D. While this individual’s visual field is reduced compared to normal (see the depiction of normal GVF in the bottom right of Figure 10A), it is sufficient to perform well on this test as long as luminance is at or above the 0.07 cd/m² threshold.

Referring to Figure 10B, a diagram 1002 may depict perimetry results from GVF testing of a left eye of subject 'VR69'. As depicted, the visual field of subject 'VR69' is composed of a few small islands; a field that is distinctly abnormal as compared to the depiction of normal GVF in the bottom right of Figure 10A. This individual’s virtual reality performance (# of Objects Tagged) is graphed in diagram 908 of Figure 9E. Using this small visual field, the subject is able to perform the virtual reality test with the left eye, although the subject tags only about half of the objects. This subject’s right eye suffered from optic nerve atrophy and he was unable to tag any of the objects on the virtual reality test using that eye (see Figure 9E).

Figure 11 is a flow chart illustrating an example process 1100 for assessing visual function using a virtual environment. In some embodiments, example process 1100 described herein, or portions thereof, may be performed at or performed by VMTS 100, processing platform 101, TC 102, sensor data collector 104, user display 108, and/or another module or node.

Referring to example process 1100, in step 1102, a virtual mobility test in a virtual environment for testing visual function of a user may be provided via a display. For example, VMTS 100 may use configuration files containing settings and/or configuration information for configuring a virtual mobility test or a related obstacle course based on the user and/or a related vision condition. In this example, after configuring the virtual mobility test, VMTS 100 may generate and display the virtual mobility test to user display 108.

In step 1104, virtual objects for intentional tagging by the user may be displayed during the virtual mobility test. For example, VMTS 100 or related entities may display a predetermined number of virtual objects (e.g., "wet" floor signs, tables, swinging pendulums, "shooting" stars, wall clocks, trashcans, boxes, etc.) at different locations near or on a path associated with a virtual mobility test.

In step 1106, a number of tagged virtual objects tagged by the user may be counted during or after the virtual mobility test. In some embodiments, a virtual object may be tagged via a tagging procedure, wherein the tagging procedure may include interacting, by the user or a virtual avatar of the user, with an untagged virtual object in the virtual environment and providing input via a physical input device.

In some embodiments, counting the number of tagged virtual objects may include sorting and/or analyzing the tagged virtual objects and untagged virtual objects by size, shape, location and/or other characteristics. For example, when assessing a user’s test performance or their related visual function, VMTS 100 or related entities may sort tagged virtual objects into various groups, e.g., based on size, shape, location, etc. In this example, VMTS 100 or related entities may also sort untagged virtual objects into various groups, e.g., based on size, shape, location, etc. Continuing with this example, VMTS 100 or related entities may use information (e.g., smallest size or farthest location consistently tagged) derived from the characteristics of tagged virtual objects (and/or untagged virtual objects) to identify vision defects or vision conditions.
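A minimal sketch of sorting tagged and untagged virtual objects by a characteristic and deriving a simple summary (e.g., the smallest size tagged) is shown below; the record fields ("size", "shape", "distance", "tagged") are hypothetical.

    from collections import defaultdict

    def group_objects(objects, key):
        # Group virtual-object records (dicts) by one characteristic,
        # e.g., group_objects(objects, "size") or group_objects(objects, "shape").
        groups = defaultdict(list)
        for obj in objects:
            groups[obj[key]].append(obj)
        return dict(groups)

    def smallest_size_tagged(objects):
        # Smallest object size the user tagged during the test, or None.
        sizes = [obj["size"] for obj in objects if obj.get("tagged")]
        return min(sizes) if sizes else None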

In some embodiments, interacting with an untagged virtual object may include touching the untagged virtual object, pointing at the untagged virtual object, or identifying the untagged virtual object.

In some embodiments, providing input via a physical input device may include pressing a button, pressing a sequence of buttons, inputting identification information, and/or moving a joystick or directional input control.

In some embodiments, a physical input device may include a game controller, a remote controller, a keyboard, a wireless handheld device, a wired handheld device, or a button device.

In step 1108, the visual function of the user may be assessed using the count of tagged virtual objects tagged by the user during the virtual mobility test.

In some embodiments, assessing a visual function of a user using a count of tagged virtual objects may include weighting the count of tagged virtual objects or weighting each of the tagged virtual objects based on environmental attributes, wherein the environmental attributes may include luminance, shadow, color, contrast, gradients of contrast or color on the surface of one or more of the tagged virtual objects, reflectance or color of borders of one or more of the tagged virtual objects, a lighting condition associated with one or more of the tagged virtual objects, a height of one or more of the tagged virtual objects, a size of one or more of the tagged virtual objects, or a motion or speed of one or more of the tagged virtual objects. For example, when weighting tagged virtual objects, larger tagged virtual objects (e.g., that are easier to detect) may be weighted less than smaller tagged virtual objects (e.g., that are harder to detect). In another example, a fast-moving virtual object (e.g., a swinging globe or ball) may be weighted more than a slower moving or static virtual object (e.g., a ball on the floor) of the same size.
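A hedged sketch of such weighting is shown below; the specific weight values for large and fast-moving objects are illustrative only.

```python
# Sketch of weighting each tagged object by attributes that make it easier or harder
# to detect (size, motion). The specific weights are assumptions for illustration.
def weighted_tag_score(tagged_objects):
    score = 0.0
    for obj in tagged_objects:
        weight = 1.0
        if obj.get("size_cm", 0) >= 50:      # large objects are easier to see
            weight *= 0.8
        if obj.get("speed_m_s", 0.0) > 0.5:  # fast-moving objects are harder to track
            weight *= 1.5
        score += weight
    return score

tagged = [{"size_cm": 60, "speed_m_s": 0.0}, {"size_cm": 20, "speed_m_s": 1.0}]
print(weighted_tag_score(tagged))  # 0.8 + 1.5 = 2.3
```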

In some embodiments, assessing a visual function of a user using a count of tagged virtual objects may include comparing the count of tagged virtual objects to a second count of tagged virtual objects associated with a person or a population that has or does not have a vision condition.

In some embodiments, assessing a visual function of a user using a count of tagged virtual objects may include comparing a computed score associated with the count of tagged virtual objects to a second computed score associated with a second count of tagged virtual objects associated with a person or a population that has or does not have a vision condition.
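For illustration, a user's count could be compared to a reference population as a number of standard deviations from the population mean; the reference counts below are placeholders, not study data.

```python
# Sketch of comparing a user's tag count to a reference population with or without
# a vision condition; the reference values are hypothetical placeholders.
from statistics import mean, stdev

def compare_to_population(user_count, reference_counts):
    """Return how many standard deviations the user falls from the reference mean."""
    mu, sigma = mean(reference_counts), stdev(reference_counts)
    return (user_count - mu) / sigma if sigma else 0.0

normally_sighted = [9, 10, 10, 8, 9, 10]  # hypothetical reference counts
z = compare_to_population(user_count=5, reference_counts=normally_sighted)
```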

In some embodiments, a display (e.g., for providing a virtual mobility test) may include an immersive or interactive display system and wherein the virtual environment may include an XR environment, an AR environment, an MR environment, or a VR environment.

In some embodiments, auditory or haptic feedback may be provided (e.g., via an XR headset, like user display 108) to a user (e.g., of VMTS 100) when a feedback condition (e.g., a successful tagging procedure or an unsuccessful tagging procedure) occurs. For example, auditory feedback for a successful tagging procedure may include a chime and/or a voice saying "tagging successful", while auditory feedback for an unsuccessful tagging procedure may include a bong and/or a voice saying "tagging unsuccessful, please try again". In another example, haptic feedback for a successful tagging procedure may be a short, pleasant buzz, while haptic feedback for an unsuccessful tagging procedure may be a longer, less pleasant rumble.

In some embodiments, virtual objects in a virtual mobility test may include a tile, an obstacle, a box object, a step-over object, a hanging or swinging object, a floating object, a moving object, a flag, a guide arrow, or a button object.

In some embodiments, assessing a visual function of a user may include assessing the user’s ability to tag the virtual objects in a proper or predetermined sequence during the virtual mobility test. For example, a virtual mobility test may display ten virtual objects, where each virtual object may be depicted with a unique number between 1 and 10 (e.g., on a surface of the virtual object or near it). In this example, a correct or proper tagging sequence may require the user to tag each virtual object in numerical order (e.g., from 1 to 10). Continuing with this example, when assessing the user’s test performance or their related visual function, VMTS 100 or related entities may generate a score, where each tagged object is worth 10 points if tagged when expected (e.g., the ‘9’ virtual object is tagged after the ‘8’ virtual object and before the ‘10’ virtual object), but may be worth fewer points (e.g., 8 points) if not.
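The following sketch illustrates one way such a sequence score could be computed, using the 10-point and 8-point values from the example above; the scoring rule itself is illustrative only.

```python
# Sketch of scoring tags by whether each numbered object was tagged in the expected
# order; the 10-point / 8-point values mirror the example above but are illustrative.
def sequence_score(tag_order, full_points=10, partial_points=8):
    """tag_order: list of object labels (1..N) in the order the user tagged them."""
    score = 0
    expected = 1
    for label in tag_order:
        if label == expected:
            score += full_points
            expected += 1
        else:
            score += partial_points
    return score

print(sequence_score([1, 2, 4, 3, 5]))  # 3*10 + 2*8 = 46
```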

In some embodiments, assessing a visual function of a user may include assessing an amount of time that it takes for the user to recognize one or more configuration parameters of the virtual mobility test, wherein the one or more configuration parameters include a start indicator, a start location, or a direction of a path of the virtual mobility test. For example, when a virtual environment of a virtual mobility test is initialized or first displayed in a user’s headset, VMTS 100 or related entities may compute how long it takes for the user to move to a start location (e.g., a starting line) and to orient (e.g., face toward the direction indicated by a displayed path of the virtual mobility test). In this example, when assessing the user’s test performance or their related visual function, VMTS 100 or related entities may reduce a performance score if the amount of time it takes the user to move to a start location and to orient exceeds a threshold value (e.g., 30 seconds).
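For illustration, a time-to-orient penalty could be applied roughly as follows; the 30-second threshold matches the example above, while the penalty amounts are assumptions.

```python
# Sketch of penalizing a performance score when the time to find the start location
# and face the path exceeds a threshold; the penalty scheme is an assumption.
def apply_orientation_penalty(base_score, seconds_to_orient, threshold_s=30, penalty_per_10s=5):
    if seconds_to_orient <= threshold_s:
        return base_score
    excess = seconds_to_orient - threshold_s
    return max(0, base_score - penalty_per_10s * (excess // 10 + 1))

print(apply_orientation_penalty(100, 45))  # 15 s over threshold -> 10-point penalty -> 90
```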

In some embodiments, assessing a visual function of a user may include assessing the user’s ability to perform a plurality of visual tasks concurrently. For example, a virtual mobility test may involve a first visual task of following a path of the virtual mobility test and a second visual task of tagging a plurality of virtual objects. In this example, when assessing the user’s test performance or their related visual function, VMTS 100 or related entities may score each visual task separately and may then use a formula and the two scores (e.g., compute an average or a weighted average of the two scores) to generate a total score indicating the user’s test performance.
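A minimal sketch of combining two separately scored visual tasks with a weighted average is shown below; the weights are illustrative.

```python
# Sketch of combining separately scored visual tasks (path following and object tagging)
# into a single total with a weighted average; the weights are assumptions.
def total_score(path_score, tagging_score, path_weight=0.4, tagging_weight=0.6):
    return path_weight * path_score + tagging_weight * tagging_score

print(total_score(path_score=80, tagging_score=60))  # 0.4*80 + 0.6*60 = 68.0
```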

In some embodiments, each visual task of a plurality of visual tasks associated with a virtual mobility test may include one or more independent variables (e.g., virtual element attributes, coloring settings, lighting conditions, etc.) that affect at least one visual element (e.g., a virtual object, a path indicator (like an arrow, one or more dashed lines, a square, etc.), a boundary indicator (e.g., a line or wall), etc.) associated with the respective visual task. In some embodiments, independent variables may affect size, shape, location, color, luminance, shadow, contrast, light intensity, or reflectivity of the at least one visual element. For example, where a first visual task of a virtual mobility test involves staying on a path or walking a path, the path may be indicated by arrows, dashed lines, squares, etc., may be presented in one or more colors (e.g., white, black, or other colors), and may use different levels of brightness or contrast. In this example, where a second visual task of the virtual mobility test involves tagging virtual objects, the virtual objects may utilize different values for various variables (e.g., visual attributes); e.g., in one test a path may be displayed as yellow arrows at a predetermined light intensity and each virtual object may be displayed as a black and white object under a predetermined contrast level.
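For illustration, the independent variables for one test run could be expressed as a simple per-task configuration like the following; the keys and values are hypothetical and merely mirror the yellow-arrow / black-and-white-object example above.

```python
# Sketch of per-task independent variables for one test run; all keys are assumptions.
test_run = {
    "path_following": {
        "indicator": "arrows",
        "color": "yellow",
        "light_intensity": 0.5,        # relative intensity of the path indicators
    },
    "object_tagging": {
        "palette": "black_and_white",
        "contrast_level": 0.3,         # relative contrast of the virtual objects
        "object_luminance_cd_m2": 0.07,
    },
}
```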

In some embodiments, testing may be done in a multi-step fashion in order to isolate the role of central vision versus peripheral vision. For example, a virtual mobility test or a related test may be configured to initially identify a luminance threshold value for the user to identify colored (red, for example) arrows on the path. This luminance threshold value may then be held constant in subsequent tests for central vision while luminance of the obstacles is modified in order to elicit the sensitivity of the user’s peripheral vision.
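A non-authoritative sketch of such a two-step protocol follows, assuming a hypothetical run_trial(arrow_luminance, obstacle_luminance=None) hook that presents one test pass and reports success or failure.

```python
# Sketch of a two-step protocol: first estimate the luminance threshold at which the
# user can identify the path arrows, then hold that constant while stepping obstacle
# luminance down to probe peripheral sensitivity. run_trial is a hypothetical callable
# with signature run_trial(arrow_luminance, obstacle_luminance=None) -> bool.
def find_arrow_threshold(run_trial, levels_cd_m2):
    """Return the lowest arrow luminance at which the user still identifies the path."""
    threshold = None
    for level in sorted(levels_cd_m2, reverse=True):
        if run_trial(arrow_luminance=level):
            threshold = level
        else:
            break
    return threshold

def probe_peripheral_sensitivity(run_trial, arrow_threshold, obstacle_levels_cd_m2):
    """Hold the arrow luminance fixed and record which obstacle luminances the user handles."""
    return {level: run_trial(arrow_luminance=arrow_threshold, obstacle_luminance=level)
            for level in sorted(obstacle_levels_cd_m2, reverse=True)}
```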

It will be appreciated that process 1100 is for illustrative purposes and that different and/or additional actions may be used. It should also be noted that VMTS 100 and/or functionality described herein may constitute a special purpose computing device. Further, VMTS 100 and/or functionality described herein can improve the technological field of eye treatments and/or diagnosis. For example, by generating and using a virtual mobility test, a significant number of benefits can be achieved, including the ability to assess visual function of a user quickly and easily without requiring the expensive and time-consuming setup (e.g., extensive lighting requirements) needed for performing a conventional mobility test. In this example, VMTS 100 and/or functionality described herein can also use a count of tagged virtual objects during one or more virtual mobility tests (e.g., a series of tests with increasing or decreasing luminosity) to more effectively and more objectively assess the visual function of a user. The details provided here can also be applicable to XR or AR systems delivered through glasses, thus facilitating usage.

It may be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the subject matter described herein is defined by the claims as set forth hereinafter.