Title:
SENSORY GAMMA STIMULATION THERAPY IMPROVES SLEEP QUALITY AND MAINTAINS FUNCTIONAL ABILITY IN ALZHEIMER'S DISEASE PATIENTS
Document Type and Number:
WIPO Patent Application WO/2022/027030
Kind Code:
A1
Abstract:
Systems and methods of the present disclosure are directed to neural stimulation via audio and visual stimulations. The combination and/or sequence of audio and visual brain stimulations can adjust, control or otherwise manage the frequency of the neural oscillations to provide beneficial effects to one or more cognitive states or cognitive functions of the brain, while mitigating or preventing adverse consequences on a cognitive state or cognitive function that stems from sleep deprivation. In doing so, the present systems and methods can reduce sleep fragmentation, improve sleep quality, and slow the progression of cognitive decline in a subject with Alzheimer's disease and MCI.

Inventors:
MALCHANO ZACHARY (US)
CIMENSER AYLIN (US)
WILLIAMS MARTIN (US)
HAJÓS MIHÁLY (US)
Application Number:
PCT/US2021/071003
Publication Date:
February 03, 2022
Filing Date:
July 27, 2021
Assignee:
COGNITO THERAPEUTICS INC (US)
MALCHANO ZACHARY JOHN HAMBRECHT (US)
CIMENSER AYLIN (US)
WILLIAMS MARTIN (US)
HAJOS MIHALY (US)
International Classes:
A61B5/377; A61B3/113; A61B5/024; A61B5/378; A61B5/38; A61M21/00; H05B47/105
Foreign References:
CN106725462A2017-05-31
US20190314641A12019-10-17
CN105278387A2016-01-27
Attorney, Agent or Firm:
BEAUSOLEIL, Lauren (US)
Claims:
CLAIMS

WHAT IS CLAIMED IS:

1. A method of improving a sleep quality experienced by a subject, the method of improving a sleep quality comprising administering an audio and a visual stimulus to the subject at a frequency effective to reduce sleep fragmentation.

2. The method of claim 1, wherein the frequency is between 20 and 60 Hertz.

3. The method of claim 1, wherein the frequency is about 40 Hertz.

4. The method of claim 1, wherein the method of improving sleep comprises reducing a duration of nighttime active periods experienced during sleep.

5. The method of claim 1, wherein the method of improving sleep comprises reducing a number of nighttime active periods experienced during sleep.

6. The method of claim 1, wherein the method of improving sleep comprises increasing a duration of slow wave sleep or a duration of rapid eye movement sleep experienced by the subject.

7. The method of claim 1, wherein the subject has Alzheimer’s disease.

8. The method of claim 1, wherein the subject has Mild Cognitive Impairment.

9. The method of claim 4, wherein reducing the duration of nighttime active periods comprises reducing the duration of active periods by at least half.

10. A method of prolonging nighttime undisturbed restful periods in a subject, the method of prolonging nighttime undisturbed restful periods comprising administering a noninvasive sensory stimulus comprising audio stimulus and visual stimulus to the subject at a frequency effective to induce synchronized gamma oscillations in at least one brain region of the subject.

11. The method of claim 10, wherein the method comprises reducing the amyloid beta burden in the at least one brain region of the subject.

12. The method of claim 10, wherein the method comprises reducing the frequency of nighttime active periods experienced by the subject.

13. The method of claim 10, wherein the method comprises increasing the duration of slow wave sleep experienced by the subject.

14. The method of claim 10, wherein the method comprises increasing the duration of rapid eye movement sleep experienced by the subject.

15. The method of claim 10, wherein the method comprises reducing the duration of nighttime active periods experienced by the subject.

16. The method of claim 10, wherein the method is repeated regularly.

17. The method of claim 10, wherein the subject has Alzheimer’s disease or Mild Cognitive Impairment.

18. The method of claim 17, wherein the method further comprises slowing the progression of cognitive impairment associated with Alzheimer’s disease.

19. A method of treating a sleep disorder in a subject in need thereof, the method of treating the sleep disorder comprising administering an audio stimulus and a visual stimulus at a frequency effective to improve brain wave coherence.

20. The method of claim 19, wherein the frequency effective to improve brainwave coherence is between 5 and 100 Hertz.

21. The method of claim 19, wherein the frequency effective to improve brainwave coherence is about 40 Hertz.

22. The method of claim 19, wherein the subject is at risk of developing Alzheimer’s Disease.

23. The method of claim 19, wherein the sleep disorder comprises insomnia.

24. The method of claim 19, wherein the subject experiences diminished slow wave sleep, reduced rapid eye movement sleep, or a combination thereof.

25. The method of claim 19, wherein the sleep disorder worsens a cognitive function.

Description:
SENSORY GAMMA STIMULATION THERAPY IMPROVES SLEEP QUALITY AND MAINTAINS FUNCTIONAL ABILITY IN ALZHEIMER'S DISEASE PATIENTS

CROSS-REFERENCE

[0001] This application claims the benefit of U.S. Provisional Patent Application No. 63/057,121, filed July 27, 2020, and U.S. Provisional Patent Application No. 63/143,481, filed January 29, 2021, each of which is incorporated herein by reference in its entirety.

INCORPORATION BY REFERENCE

[0002] Each patent, publication, and non-patent literature cited in the application is hereby incorporated by reference in its entirety as if each was incorporated by reference individually.

BACKGROUND

[0003] Neural oscillation occurs in humans or animals and includes rhythmic or repetitive neural activity in the central nervous system. Neural tissue can generate oscillatory activity by mechanisms within individual neurons or by interactions between neurons. Oscillations can appear as either oscillations in membrane potential or as rhythmic patterns of action potentials, which can produce oscillatory activation of post-synaptic neurons. Synchronized activity of a group of neurons can give rise to macroscopic oscillations, which can be observed by electroencephalography (“EEG”). Neural oscillations can be characterized by their frequency, amplitude and phase. Neural oscillations can give rise to electrical impulses that form a brainwave. These signal properties can be observed from neural recordings using time-frequency analysis.

[0004] Alzheimer's disease (AD) is a progressive neurodegenerative illness with long preclinical and prodromal phases, resulting in cognitive dysfunction, behavioral abnormalities, and impaired performance of activities of daily living. It has been well-established that hallmarks of AD-related pathological proteins, such as Aβ oligomers and hyperphosphorylated tau (h-tau), disrupt normal neuronal functions in the brain; however, a recent hypothesis suggests that abnormal neuronal activity directly contributes to the pathogenesis of the disease. In fact, induction of synchronized gamma oscillation of neuronal networks by optogenetic or sensory stimulation effectively reverses AD-related pathological markers, such as Aβ and h-tau, in transgenic mice carrying AD-related human pathological genes.

[0005] Sleep disorders are more frequent and more severe in Mild Cognitive Impairment (MCI) and AD patients compared to cognitively normal older adults. Sleep disorders in MCI and AD patients are well recognized, with a 35-60% prevalence of some form(s) of sleep abnormalities. One of the main sleep complaints of AD patients is excessive nocturnal awakenings. Accordingly, polysomnographic (PSG) studies report abnormal sleep architecture with diminished slow wave sleep (SWS) and reduced rapid eye movement (REM) sleep not only in advanced AD patients, but also in early MCI or prodromal-stage patients. Furthermore, PSG studies also show structural changes on scales from seconds (K-complex, spindle morphology) to minutes/hours (sleep cycles), such that even distinguishing traditionally established sleep stages can be challenging.

[0006] Accumulating clinical data demonstrate a strong, bidirectional connection between sleep disorders and disease progression in AD, indicating a vicious circle contributing to AD progression. It has been found that sleep disorders are associated with greater AD pathology in cognitively normal elderly subjects, indicated by AD-related cerebrospinal fluid biomarkers (both Aβ and tau) and markers of neuroinflammation/astroglial activation. Using 18F-florbetaben PET imaging, it has also been shown that sleep deprivation in healthy subjects resulted in a significant increase in brain Aβ burden. Furthermore, sleep deprivation was also associated with tau pathology in early AD. However, it is also well documented that AD-related pathomechanisms, such as Aβ, disrupt sleep and hippocampal-dependent memory consolidation. In line with these observations, recent experimental and epidemiological findings demonstrate that sleep disorders represent a risk factor for developing AD, and a close correlation exists between sleep disorders and decline in cognitive function and activities of daily living of AD patients.

[0007] Moreover, because sleep disturbances can have broad behavioral effects, targeting sleep improvement is an important aspect of therapeutic strategies for subjects with AD. Furthermore, in AD patients as well as broader populations, improvements in sleep quality and/or brain wave coherence can have direct beneficial effects ranging from the enhancement of brain processes clearing toxic metabolites and misfolded proteins, to the improvement or maintenance of performance, mood, and wellbeing. In fact, sleep disorders are considered a major risk factor for early institutionalization of patients. Given the well-recognized architecture of human physiological sleep, consisting of periods of different types of sleep in a strict sequential order, sleep fragmentation disrupts sleep architecture and consequently sleep quality. Sleep abnormalities, such as sleep fragmentation, have multiple impacts on human physiology, causing dysfunction not only in the nervous system but also in body metabolism and the immune defense system. Furthermore, the detrimental cognitive impacts of sleep abnormalities are particularly worrisome in AD patients whose cognitive performance is already diminished by the disease. Additionally, sleep fragmentation can worsen patients' affective function, aggravating depression or agitation.

SUMMARY

[0008] In one aspect, herein is provided a method of improving a sleep quality experienced by a subject, the method of improving a sleep quality comprising administering an audio and a visual stimulus to the subject at a frequency effective to reduce sleep fragmentation. In some aspects, the frequency is between 20 and 60 Hertz. In some aspects, the frequency is about 40 Hertz. In one aspect, the method of improving sleep comprises reducing a duration of nighttime active periods experienced during sleep. In some aspects, reducing the duration of nighttime active periods comprises reducing the duration of active periods by at least half. In a further aspect, the method of improving sleep comprises reducing a number of nighttime active periods experienced during sleep. In other aspects, the method of improving sleep comprises increasing a duration of slow wave sleep or a duration of rapid eye movement sleep experienced by the subject. In some aspects, the subject has Alzheimer’s disease. In some aspects, the subject has Mild Cognitive Impairment.

[0009] In another aspect, herein is provided a method of prolonging nighttime undisturbed restful periods in a subject, the method of prolonging nighttime undisturbed restful periods comprising administering a noninvasive sensory stimulus comprising audio stimulus and visual stimulus to the subject at a frequency effective to induce synchronized gamma oscillations in at least one brain region of the subject. In some aspects, the method comprises reducing the amyloid beta burden in the at least one brain region of the subject. In other aspects, the method comprises reducing the frequency of nighttime active periods experienced by the subject. In one aspect, the method comprises increasing the duration of slow wave sleep experienced by the subject. In other aspects, the method comprises increasing the duration of rapid eye movement sleep experienced by the subject. In one aspect, the method comprises reducing the duration of nighttime active periods experienced by the subject. In some aspects, the method is repeated regularly. In another aspect, the subject has Alzheimer's disease or Mild Cognitive Impairment. In some aspects, the method further comprises slowing the progression of cognitive impairment associated with Alzheimer's disease.

[0010] In further aspects, the present disclosure provides a method of treating a sleep disorder in a subject in need thereof, the method of treating the sleep disorder comprising administering an audio stimulus and a visual stimulus at a frequency effective to improve brain wave coherence. In some aspects, the frequency effective to improve brainwave coherence is between 5 and 100 Hertz. In some aspects, the frequency effective to improve brainwave coherence is about 40 Hz. In further aspects, the subject is at risk of developing Alzheimer’s Disease. In some aspects, the sleep disorder comprises insomnia. In one aspect, the subject experiences diminished slow wave sleep, reduced rapid eye movement sleep, or a combination thereof. In another aspect, the sleep disorder worsens a cognitive function.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIGURE 1 illustrates a block diagram depicting a system to perform neural stimulation via visual stimulation in accordance with an embodiment.

[0012] FIGURES 2A-2F illustrate visual stimulation signals that cause neural stimulation in accordance with some embodiments.

[0013] FIGURES 3A-3C illustrate fields of vision in which visual signals can be transmitted for visual brain entrainment in accordance with some embodiments.

[0014] FIGURES 4A-4C illustrate devices configured to transmit visual signals for neural stimulation in accordance with some embodiments.

[0015] FIGURES 5A-5D illustrate devices configured to transmit visual signals for neural stimulation in accordance with some embodiments.

[0016] FIGURES 6A AND 6B illustrate devices configured to receive feedback to facilitate neural stimulation in accordance with some embodiments.

[0017] FIGURES 7A and 7B are block diagrams depicting embodiments of computing devices useful in connection with the systems and methods described herein.

[0018] FIGURE 8 is a flow diagram of a method of performing neural stimulation using visual stimulation in accordance with an embodiment.

[0019] FIGURE 9 is a block diagram depicting a system for neural stimulation via auditory stimulation in accordance with an embodiment.

[0020] FIGURES 10A-10I illustrate audio signals and types of modulations to audio signals used to induce neural oscillations via auditory stimulation in accordance with some embodiments.

[0021] FIGURE 11A illustrates audio signals generated using binaural beats, in accordance with an embodiment.

[0022] FIGURE 11B illustrates acoustic pulses having isochronic tones, in accordance with an embodiment.

[0023] FIGURE 11C illustrates audio signals having a modulation technique including audio filters, in accordance with an embodiment.

[0024] FIGURES 12A-12C illustrate configurations of systems for neural stimulation via auditory stimulation in accordance with some embodiments.

[0025] FIGURE 13 illustrates a configuration for a system for room-based auditory stimulation for neural stimulation in accordance with an embodiment.

[0026] FIGURE 14 illustrates devices configured to receive feedback to facilitate neural stimulation via auditory stimulation in accordance with some embodiments.

[0027] FIGURE 15 is a flow diagram of a method of performing auditory brain entrainment in accordance with an embodiment.

[0028] FIGURE 16A is a block diagram depicting a system for neural stimulation via peripheral nerve stimulation in accordance with an embodiment.

[0029] FIGURE 16B is a block diagram depicting a system for neural stimulation via multiple modes of stimulation in accordance with an embodiment.

[0030] FIGURE 17A is a block diagram depicting a system for neural stimulation via visual stimulation and auditory stimulation in accordance with an embodiment.

[0031] FIGURE 17B is a diagram depicting waveforms used for neural stimulation via visual stimulation and auditory stimulation in accordance with an embodiment.

[0032] FIGURE 18 is a flow diagram of a method for neural stimulation via visual stimulation and auditory stimulation in accordance with an embodiment.

[0033] FIGURE 19 is an efficacy summary chart for the modified intent to treat (mITT) population, including p-values, difference, confidence intervals (CI), and a standardized estimate of efficacy based on the values.

[0034] FIGURE 20 shows the separate means analysis, on the left, and the linear model analysis, on the right, of the Alzheimer's Disease composite score (ADCOMS) as optimized for mild and moderate Alzheimer's Disease (MADCOMS) for the sham and active treatment groups.

[0035] FIGURE 21 shows the separate means analysis, on the left, and a linear model analysis, on the right, of the Alzheimer's Disease Assessment Scale-Cognitive Subscale 14 (ADAS-Cog14) values for the sham and active treatment groups.

[0036] FIGURE 22 shows the separate means analysis, on the left, and a linear model analysis, on the right, of the Clinical Dementia Rating Scale Sum of Boxes (CDR-SB) values for the sham and active treatment groups.

[0037] FIGURE 23 shows the separate means analysis, on the left, and a linear model analysis, on the right, of the Alzheimer’s Disease Cooperative Study - Activities of Daily Living Scale (ADCS-ADL) scores for the sham and active treatment groups.

[0038] FIGURE 24 shows the linear model analysis of the Mini-Mental State Examination (MMSE) score, as measured after six months of treatment (i.e., at the last time point).

[0039] FIGURE 25 shows the linear model analysis of magnetic resonance imaging (MRI) results for whole brain volume, on the left, and hippocampal volume, on the right, after six months of treatment.

[0040] FIGURE 26 is a table depicting a summary of efficacy findings resulting from the human clinical trial, including p-values, treatment differences, CI values and the percentage of slowing of brain atrophy.

[0041] FIGURE 27 shows graphs that demonstrate the observed improvement (panels a and b) in sleep quality as measured by a reduction in sleep fragmentation, expressed as a higher frequency of longer rest durations, over a 24-week period of exemplary gamma stimulation treatment, for a first 12-week period of treatment (indicated by the line closest to the white arrow) and a second 12-week period of treatment (indicated by the line furthest from the white arrow), in mild to moderate AD subjects. Panels c and d demonstrate the observed impact of the sham treatment on sleep quality as measured by a reduction in sleep fragmentation.

[0042] FIGURE 28 demonstrates power changes responsive to (1 hr) 40 Hz LED stimulus in an exemplary embodiment showing 40 Hz steady state oscillation and enhanced alpha power during and following stimulus, in a young healthy subject. Both panels illustrate the time-frequency domain decomposition of EEG activity recorded over the occipital pole (Oz, channel-64) before, during and after 40 Hz gamma stimulation. The start and stop of gamma stimulation are marked with STIM ON and STIM OFF boundaries in both panels. The upper panel illustrates enhanced 40 Hz power during stimulation indicating steady-state visually evoked potential (SSVEP). The lower panel shows alpha-power dynamics during eyes-open (EYO) and eyes-closed (EYC) conditions, and the enhanced alpha power both during eyes-open gamma stimulation, as well as following the one-hour 40 Hz gamma stimulation.

[0043] FIGURE 29 provides illustrations of the composite global cognitive summary score as a function of average sleep fragmentation (panel A), and composite expression of genes enriched in aged microglia (panel B). The dotted lines show 95% confidence intervals of the estimate.

[0044] FIGURE 30 provides an oscilloscope capture of the visual (upper signal) and audio (lower signal) signals of an exemplary non-invasive sensory stimulus with fs equal to 40 Hz, vd equal to 50%, VD equal to 50%, ft equal to 7,000 Hz, and AD equal to 0.57%.

[0045] FIGURE 31 shows a schematic of some aspects and parameters characterizing stimulus audio and visual components of non-invasive stimulation as delivered respectively by Audio Stimulus Module (110; FIG. 33) and Visual Stimulus Module (120; FIG. 33) of Stimulus Delivery System (170; FIG. 33). Numbers and relative dimensions of elements in FIG. 31 are adjusted for presentation and may not represent those for actual embodiments.

[0046] FIGURE 32 demonstrates an overview of enrollment, treatment, and control for an exemplary embodiment of non-invasive stimulation improving sleep quality in mild to moderate AD subjects. Treatment was delivered to two thirds of the subjects (12) using 40 Hz frequency audio, and one third of subjects (6, "control") at an alternate frequency.

[0047] FIGURE 33 provides a block diagram of an exemplary stimulus delivery system and analysis and monitoring system, said analysis and monitoring system comprising modules specific to sleep-related monitoring and/or analysis.

[0048] FIGURE 34 provides actigraphy data from 24 hours of activity levels (gray bar; 1501, FIG. 37) over two days for a single example patient, centered around 12 AM (indicated by double-sided arrow), along with a median filtered curve (labeled with a dotted arrow; 1507, FIG. 37). The horizontal axis of FIG. 34 shows time of day, and the vertical axis is relative activity recorded on a wrist-worn actigraphic measuring device (arbitrary log scale). Calculated sleep periods (black horizontal lines; see 1508, FIG. 37) along with individual sample rest periods (yellow horizontal lines; see 1509, FIG. 37) are shown, with the top panel (a) showing an exemplary pattern of frequent movements and short rest periods during sleep periods, and the bottom panel (b) showing an exemplary pattern of less frequent movements and longer rest periods during sleep periods.

[0049] FIGURE 35 provides exemplary patterns of actigraphy (arbitrary units, see FIG. 34) over several days showing actigraphy (gray; e.g., 1501, FIG. 37), with a smooth curve superposed. The cutoff line (black) separates active versus rest periods (e.g., 1505, FIG. 37). Black squares represent the initial estimation for the mid-night point (e.g., 1507, FIG. 37). The final assessment of the mid-night points is determined through an optimization algorithm (e.g., 1508, FIG. 37).

[0050] FIGURE 36 provides an exemplary cumulative distribution of rest periods from a single patient (e.g., 1511, FIG. 37). Data from a first exemplary 12 weeks of treatment (solid line's points, Week 0-12) and a second exemplary 12 weeks of treatment (dashed line's points) are shown. In some embodiments the distribution is characterized by an exponential distribution (e.g., 1512, FIG. 37). In a further embodiment, an increase in the exponential decay constant represents an improvement in sleep quality (e.g., 1513, FIG. 37). In the present example, tau2 = 45 min, tau1 = 40 min, and tau_diff = 5 min > 0.

[0051] FIGURE 37 provides a flowchart of exemplary analysis steps responsive to actigraphy data, provided in some embodiments at least in part by Actigraphy Monitoring Module 130 (FIG. 33). In some embodiments, analysis is directed at determining the cumulative distribution of rest periods for one or more subjects over a period of one or more nighttime sleep periods (1511). In some embodiments analysis is further directed at fitting an exponential distribution to the determined cumulative distribution (1512). In some embodiments, analysis is further directed at computing summary statistics or characteristic parameters for the fitted exponential distribution. In an exemplary embodiment, the exponential decay constant for the fitted exponential distribution is determined (1512; FIG. 36). In FIG. 37, terms in italics in braces refer to MATLAB (R2020a) APIs employed in the corresponding steps in an exemplary embodiment; e.g., "medfilt1" refers to 1-D median filtering. In some embodiments, alternate APIs, methods, or processes with equivalent function are employed (e.g., Wolfram Language's "ButterworthFilterModel" may be substituted for "butter").

[0052] FIGURE 38 provides sample actigraphy recordings from a single patient, said sample actigraphy recordings demonstrating the effect of gamma stimulation therapy on sleep through recordings taken over five consecutive nights prior to treatment, and five consecutive nights following treatment. The dark gray, horizontal bars below the X axis indicate continuous activity periods, with the continuous activity periods appearing significantly higher in the actigraphy recordings taken prior to treatment than in the actigraphy recordings taken following treatment.

[0053] FIGURE 39 provides a cumulative distribution of rest and active durations in nighttime based on data pooled from all participants. The black squares indicate active periods, and the gray squares indicate rest periods. Panel A of FIG. 39 shows the cumulative distribution using a log-linear scale, and Panel B of FIG. 39 shows the cumulative distribution using a log-log scale.

[0054] FIGURE 40 shows graphs comparing the relative change in active durations, with the Y-axis indicating change relative to Weeks 1-12 during Weeks 13-24. FIG. 40 demonstrates a reduction in duration of active periods for the treatment group and, consequently, a reduction in sleep fragmentation leading to increased sleep quality. In contrast, the opposite effect was seen with the sham group, which is represented by the line closest to the gray arrow. Panel A of FIG. 40 shows the relative change based on the duration of active periods, and Panel B of FIG. 40 shows the normalized nighttime active durations, calculated by dividing the duration of each active period by the duration of the matching entire nighttime period.

[0055] FIGURE 41 shows the effect of gamma stimulation therapy on maintenance of daytime activities, as assessed by the Activities of Daily Living (ADCS-ADL) score. The graph shows that changes in daytime activities significantly improved in the treatment group and declined in the sham group. The X-axis compares the period from Week 1-12 and the period from Week 13-24. The Y-axis demonstrates the change in ADCS-ADL score during Weeks 13-24 relative to Weeks 1-12.

[0056] FIGURE 42 provides a flow chart demonstrating the proposed relationship between Alzheimer's disease and sleep dysfunction. This was adapted from Wang, C. and D. M. Holtzman (2020). "Bidirectional relationship between sleep and Alzheimer's disease: role of amyloid, tau, and other factors." Neuropsychopharmacology 45(1): 104-120.

[0057] FIGURE 43 provides an exemplary embodiment of a hand-held controller for adjusting parameters of the stimulus delivered by an operably coupled stimulus apparatus.

[0058] The features and advantages of the present solution will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate like elements.

DETAILED DESCRIPTION OF THE INVENTION

[0059] Described herein are systems and methods for delivering noninvasive stimulation to a human subject and/or producing gamma wave oscillations in the brain of a human subject to improve sleep quality and potentially prevent, mitigate, and/or treat dementia, in particular AD, along with providing other sleep-related benefits. In particular, the present disclosure uses noninvasive stimulation to generate sensory-evoked potentials in at least one region of the brain and, as a result, mitigate symptoms of cognitive decline associated with sleep deprivation. The present disclosure achieves improvement in sleep quality, including reduced sleep fragmentation and increased nighttime restful periods as assessed from actigraphy data, through non-invasive, convenient, and easily tolerated treatment, in mild to moderate AD subjects, with applications to wider populations of users. Moreover, the present solution provides a method that can be easily administered in the home or another familiar setting by the patient or caregiver, thus avoiding transportation between home and a clinical facility.

[0060] The present technological solution achieves the entrainment of gamma wave oscillations in the brain and/or reduction in sleep fragmentation through a variety of methods and systems, and includes aspects covering the monitoring and analysis of sleep quality, motivation and feedback to users and third parties, and specific stimulation parameters targeted at sleep improvement. The disclosure further achieves improved brain wave coherence, measured through increased power in alpha and other frequency bands and other methods for assessing functional connectivity, which are associated with cognitive function, brain health, and general wellbeing.

[0061] Sleep fragmentation is associated with increased expression of genes characteristic of aged microglia and the proportion of morphologically activated microglia, which are in turn correlated with, and may underlie, sleep-fragmentation associated cognitive deficits. Based on these and other clinical observations, reducing sleep fragmentation and/or improving sleep quality in MCI and AD patients can provide multiple benefits: better sleep will enhance patients’ daytime performance, including cognitive function, and reduce behavioral pathologies and daytime sleepiness. Furthermore, improved sleep quality as a result of reduced sleep fragmentation can also positively modify disease progression.

[0062] In some embodiments, the present disclosure delivers non-invasive stimulation directed at producing a reduction in sleep fragmentation during night-time sleep of mild to moderate AD patients. In some embodiments, the present disclosure further describes technologies directed at increasing the length of restful periods during sleep and/or reducing the frequency of awakenings during sleep. Reduction in sleep fragmentation has been successfully demonstrated in subjects receiving non-invasive gamma audio-visual stimulation, while subjects in a control group, using identical devices but receiving alternate-frequency audio-visual stimulation, showed a further deterioration in sleep quality, indicating a progression in sleep fragmentation. As assessed by actigraphy recordings and analysis, the data demonstrated longer restful periods in sleep, therefore reducing sleep fragmentation.

[0063] In some embodiments, the present disclosure delivers non-invasive stimulation directed at producing beneficial changes in actigraphy during night-time sleep of mild to moderate AD patients. In some embodiments, the present disclosure describes technologies for delivering non-invasive gamma stimulation directed at producing beneficial changes in actigraphy during sleep periods in mild to moderate AD patients. In some embodiments, the present disclosure is directed at producing changes in actigraphy during sleep in mild to moderate AD patients through the application of audio-visual gamma wave stimulation. In some embodiments, the present disclosure describes technologies directed at producing beneficial changes in sleep of mild to moderate AD patients through the application of audio-visual gamma wave stimulation.

[0064] In some embodiments, the present disclosure describes technologies for delivering non-invasive gamma stimulation directed at producing beneficial changes in actigraphy during sleep periods in one or more of: subjects at risk of AD, subjects experiencing cognitive decline, subjects experiencing sleep disruption, subjects diagnosed with AD, subjects diagnosed with MCI, healthy subjects, subjects with sleep pathologies, and subjects with sleep disruptions. In some embodiments, beneficial changes in actigraphy include a reduction in sleep fragmentation. In some embodiments, beneficial changes in actigraphy include one or more of: an increase in the frequency of restful periods during sleep periods and/or a reduction in the frequency of sleep interruptions during sleep periods. In some embodiments, the present disclosure delivers non-invasive stimulation directed at producing a reduction in sleep fragmentation during night-time sleep of mild to moderate AD patients. In some embodiments, the present disclosure further describes technologies directed at increasing the length of restful periods during sleep and/or reducing the frequency of awakenings during sleep.

[0065] In some embodiments, technologies directed at producing beneficial changes in actigraphy are further directed at producing beneficial sleep-related health outcomes. In some embodiments beneficial sleep-related health outcomes include one or more of: clearance of brain waste products, mitigation of cognitive deficits, slowing or delay of AD progression, reduction of circadian rhythm disruptions, reduction of microglial aging and activation, reduction in cognitive impairment, reduction in depression symptoms, mitigation of appetite or eating disorders, reduction in agitation, reduction in apathy, reduction in psychosis symptoms (including delusions and hallucinations), reduction in aggression, reduction in behavioral and psychiatric symptoms of dementia, stabilizing and/or preventing the degradation of one or more measures of performance. In some embodiments, mitigated circadian rhythm disruptions include but are not limited to disruptions associated with: AD, MCI, ageing, eating disorders, irregular sleep wake rhythm disorder, depression, anxiety, stress.

[0066] In some embodiments, sleep, during sleep, or sleep periods may refer to nighttime periods of relative inactivity or periods of frequent rest. In some further embodiments, such periods of relative inactivity or frequent rest refer to those characterized by patterns of actigraphy, including but not limited to patterns of actigraphy identified using the methods described in embodiments of the present technological solution. FIG. 34 provides an example of a pattern of actigraphy identified using the methods described herein. FIG. 34 shows twenty-four (24) hours of activity levels (gray; 1501, FIG. 37) over two days for a single example patient, centered around 12 AM (indicated by the thick, gray arrows) along with a median filtered curve (labelled by thin arrows; 1507, FIG. 37). The horizontal axis shows time of day; the vertical axis is relative activity recorded on a wrist-worn actigraphic measuring device (arbitrary log scale). Calculated sleep periods (black horizontal lines; see 1508, FIG. 37) along with individual sample rest periods (yellow horizontal lines; see 1509, FIG. 37) are shown, with (a) showing an exemplary pattern of frequent movements and short rest periods during sleep periods, and (b) showing an exemplary pattern of less frequent movements and longer rest periods during sleep periods. Similarly, FIG. 35 provides exemplary patterns of actigraphy (arbitrary units, see FIG. 34). FIG. 35 provides actigraphy data over several days (gray; e.g., 1501, FIG. 37), with a smooth curve superposed. The cutoff line (black line) separates active vs rest periods (e.g., 1505, FIG. 37). The black squares represent the initial estimation for the mid-night point (e.g., 1507, FIG. 37), of which a final assessment of the mid-night points will be determined through an optimization algorithm (e.g., 1508, FIG. 37).

Delivery Methods and Systems

[0067] The present disclosure provides a method directed at improving sleep quality (2020a, 2020b) and/or evoking gamma wave oscillations in a subject, the method comprising non-invasively delivering a signal configured with stimulus program parameters directed at improving sleep quality and/or evoking gamma wave oscillations in a subject. In some embodiments, the present disclosure achieves sleep quality improvement by enhancing coherence or power of gamma oscillations in at least one brain region of the subject.

[0068] In some embodiments the non-invasive signal is delivered through one or more of: visual, auditory, tactile, olfactory stimulation, or bone conduction. In some embodiments combined audio-visual stimulation is delivered for an hour each day for a 3 to 6 month or longer period. In some embodiments, stimulation is delivered for two hours each day. In some embodiments, stimulation is delivered for multiple periods over the course of a day. In some embodiments combined audio-visual stimulation is delivered over an extended open-ended period of time. In some embodiments stimulus is delivered in periods of varying durations. In some embodiments stimulus is delivered responsive to opportunities to effectively deliver stimulus, such opportunities determined by one or more of: monitoring, analysis, user or caregiver input, clinician input. In some embodiments, a first stimulus period is delivered through a first apparatus, and a second stimulus period is delivered through a second apparatus. In some embodiments, a first stimulus period and a second stimulus period are delivered through a single apparatus.

[0069] In some embodiments, the non-invasive signal is delivered at least in part through glasses, goggles, a mask, or other worn apparatus that provide visual stimulation. In some embodiments, the non-invasive signal evokes gamma wave oscillations to improve sleep.

[0070] In some embodiments, the non-invasive signal is delivered at least in part through one or more devices in the user’s environment, such as a speaker, lighting fixtures, bed attachment, wall mounted screen, or other household device. In a further embodiment, such devices are controlled by a further device, such as a phone, tablet, or home automation hub, configured to manage the delivery of the non-invasive signal through the one or more devices in the user’s environment. In some embodiments such devices may additionally include worn devices.

[0071] In some embodiments, the non-invasive signal is delivered at least in part through headphones that provide auditory stimulation. In some embodiments, the present disclosure evokes gamma wave oscillations to improve sleep through headphones that provide auditory stimulation.

[0072] In some embodiments, the non-invasive signal is delivered through a combination of visual and auditory stimulation. In some embodiments, the present disclosure evokes gamma wave oscillations to improve sleep through a combination of visual and auditory stimulation.

[0073] In some embodiments, the non-invasive signal is delivered through a pair of opaque or partially transparent glasses worn by the subject with illuminating elements on the interior providing a visual signal. In some embodiments, the non-invasive signal is delivered through headphones or earbuds worn by the subject providing an auditory signal. In some embodiments, combined visual and auditory signals are provided by such headphones and glasses worn together at the same time. In some embodiments visual and auditory signals are delivered separately by glasses or headphones worn at different times. An exemplary embodiment includes a pair of glasses, with LEDs on the interior of the glasses providing visual stimulation and headphones providing auditory stimulation.

[0074] In some embodiments, subjects control aspects of the stimulus signal directed at achieving one or more of: tolerance, comfort, effectiveness, reduction in fatigue, compliance, adherence. In some embodiments, subjects or third parties can pause, interrupt, or terminate delivery of stimulus. In an exemplary embodiment, subjects and/or third parties can adjust peak audio volume and/or visual intensity of stimulus within a predefined safe operating range using a hand-held controller operably coupled to a stimulation delivery apparatus.

[0075] In some embodiments, the non-invasive signal is delivered through vibrotactile stimulation via an article of clothing or body attachment suitable for wearing proximate to or during periods of sleep or rest. In some embodiments such body attachment may include a device providing treatment for a condition of a user during sleep, such as a CPAP device. In some embodiments non-invasive signals may be delivered through the user's nostrils.

[0076] In some embodiments, the non-invasive signal is administered at least in part by a device as specified in one or more of US Patents US 10307611 B2, US 10293177 B2, or US 10279192 B2.

[0077] In some embodiments, the present disclosure delivers the non-invasive signal through a sleep mask worn over open or closed eyes of a subject. In some embodiments, the present technological solution further provides visual stimulation through closed or partially closed eyelids. In some embodiments, a sleep mask is any device worn by the user proximate to sleep periods. In some embodiments, a sleep mask may be used in contexts and at times unrelated to sleep periods.

[0078] In an exemplary embodiment, a sleep mask with built-in or Bluetooth-paired or other wireless technology paired or physically paired headphones or earbuds provides the capability for delivering visual stimulation, auditory stimulation, or a combination of the two. In a further exemplary embodiment, visual stimulation is automatically provided when the mask is covering the eyes and auditory stimulation is only provided when headphones or earbuds are seated or worn.

[0079] In some embodiments, stimulation is delivered by a device that can be worn throughout a subject's sleep period (including but not limited to, for example, a sleep mask embodiment). In a further embodiment, stimulation can be delivered by the device responsive to a user's detected sleep state and/or other information indicative of a user's activity. In an exemplary embodiment, the device delivers stimulation only during periods of detected sleep interruptions, or specific sleep stages, including but not limited to resting before the first period of sleep and/or waking or leaving a sleep area during the night. In some embodiments, stimulation parameters are adjusted responsive to detected sleep state or other monitoring. In an exemplary embodiment, users are offered audio-only stimulation during nighttime periods of sleep interruption. In some embodiments sleep state is detected responsive to one or more of: EEG, information about the location or position of a subject, actigraphy.
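The delivery logic just described can be expressed as a simple decision rule. The following is a minimal Python sketch offered only as an illustration: the sleep-state labels, the mask-worn check, and the audio-only rule for nighttime interruptions are assumptions drawn loosely from the exemplary embodiments above, not parameters defined by this disclosure.

```python
# Hypothetical sketch of a sleep-state-responsive delivery policy. The state
# labels and the audio-only nighttime rule are illustrative assumptions.
def select_stimulation_mode(sleep_state: str, is_nighttime: bool, mask_worn: bool) -> str:
    """Return "audio+visual", "audio", or "off" for the current monitoring sample."""
    if not mask_worn:
        return "off"                   # no delivery without the worn apparatus
    if is_nighttime and sleep_state == "awake_interruption":
        return "audio"                 # audio-only stimulation during nighttime sleep interruptions
    if sleep_state in ("resting_pre_sleep", "awake_daytime"):
        return "audio+visual"          # full stimulation while resting before sleep or during the day
    return "off"                       # no stimulation during other detected sleep stages
```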

[0080] In some embodiments, stimulation is delivered to more than one subject present in a space. In an exemplary embodiment, stimulation is delivered to more than one subject in a space through devices present in the space, such devices delivering the same stimulus to all present subjects, or customized stimulus to individual subjects, or a combination thereof.

Monitoring, Feedback and Motivation

[0081] In some embodiments, the present disclosure provides for one or more of monitoring sleep quality and sleep related aspects; providing feedback to users and third parties relating to these aspects, and motivating users or third parties in the use of the stimulation device or other related activities or therapies. For example, TABLE 1 provides an exemplary testing and monitoring protocol. In TABLE 1, X indicates an office assessment, P indicates a phone assessment, and A indicates an in-home assessment. In some embodiments, an in-home assessment comprises an in-person assessment. In some embodiments, an in-home assessment comprises a video call or a phone call. In some embodiments, the present disclosure executes the exemplary protocol of FIG. 33 in assessing sleep-related conditions. In some embodiments, the present disclosure uses other measures of the effects of non-invasive stimulation. In some embodiments, for example, the present disclosure provides a system that assesses sleep-related conditions using the protocol provided in FIG. 32.

[0082] TABLE 1. Exemplary Testing and Monitoring Protocol. a Only if not done within previous 8 weeks. b Done by Cognito Team; visit window does not apply to allow for support as needed throughout the study. Additional Ad Hoc visits may occur; may be done by home visit, video call or telephone call as needed for adequate subject support and data collection.

1 Includes care partner interview.

2 7M Visit only conducted if subject did not continue on to the Extension Phase immediately after the 6M Visit.

Monitoring

[0083] In some embodiments, the present disclosure monitors sleep-related parameters, such as actigraphy, heart rate, heart rate variability (HRV), respiratory rate, wakings, time out of bed, ambient audio, ambient light levels, light levels reaching subjects' eyes or eyelids, or temperature. In further embodiments, the disclosure provides for such monitoring in association with or responsive to the delivery of gamma stimulation therapy.

[0084] Measurements of sleep quality may include one or more of: waking durations, time out of bed, motion, body position, eye motion, eyelid status, respiratory sounds, snoring, respiration, heart rate, HRV, respiratory rate, sleep fragmentation. Measurements of sleep quality may include environmental aspects associated with sleep quality including but not limited to one or more of: room noise, room temperature, air circulation, air chemistry, bed temperature, partner sleep attributes, room configuration. Measurements of sleep quality may include other aspects associated with sleep quality, including but not limited to one or more of: alertness tests or self-reports, assessments, surveys, cognitive challenges, physical challenges, task performance, productivity, third party assessment, daily activity, sports performance, appetite, weight gain or loss, hormonal changes, medication use, or other aspects of user performance or well-being known or likely to be correlated with sleep quality. Measurements of sleep quality may include measurements taken during sleep or at other times, as appropriate.

[0085] In some embodiments, measurements are taken of the user's sleep environment and conditions. Such measures may include room temperature, carbon dioxide levels, air circulation, ambient noise, etc. Measures may also include information relating to other aspects of the user (e.g., stressful tasks or events, exercise, diet) likely to affect sleep quality.

[0086] In some embodiments, monitoring may include measuring of a subject's brain wave parameters, including but not limited to neural activity, gamma entrainment, power in specific frequency bands, attributes of resting quantitative EEG, sensory evoked potentials, steady-state oscillations and induced oscillations, changes in coherence, cross-frequency amplitude coupling, harmonics. In some embodiments, measurement of a subject's brain wave parameters is performed by a module incorporated into a component of the stimulation delivery apparatus. In some embodiments, measurement of a subject's brain wave parameters is performed by a module incorporated into a separate device. In some embodiments, gamma entrainment and/or entrainment at other frequencies is detected by one or more methods (e.g., FIG. 28) and systems described at least in part in US 10279192 B2 (e.g., as illustrated there in FIG. 39, by identifying a plurality of neurons in the brain of a subject oscillating at a specific frequency following or during the application of stimulus).

[0087] In some embodiments an entrainment score, responsive at least in part to measurement of gamma entrainment, is computed. In some embodiments, measurements and computations directed at entrainment detection activities are performed according to a schedule (e.g., TABLE 1); in some embodiments, scheduling, timing, and/or other attributes of activities directed at entrainment detection are responsive to one or more of: user input, user state, third party input, third party state, observations of user state or environment.
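As one illustration of how such an entrainment score might be computed, the following Python sketch compares narrow-band EEG power around the 40 Hz stimulation frequency during stimulation with a pre-stimulation baseline (cf. the enhanced 40 Hz steady-state power shown in FIG. 28). The band limits, spectral window, and ratio-based score are assumptions for illustration rather than a scoring rule defined by this disclosure.

```python
# Hypothetical entrainment-score sketch: ratio of narrow-band 40 Hz EEG power
# during stimulation to the same band at baseline. Band limits, window length,
# and the ratio rule are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def band_power(eeg: np.ndarray, fs: float, f_lo: float = 39.0, f_hi: float = 41.0) -> float:
    """Mean power spectral density in a narrow band around the stimulation frequency."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    in_band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(psd[in_band].mean())

def entrainment_score(eeg_baseline: np.ndarray, eeg_stim: np.ndarray, fs: float) -> float:
    """Scores above 1 suggest enhanced 40 Hz steady-state power during stimulation."""
    return band_power(eeg_stim, fs) / band_power(eeg_baseline, fs)
```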

[0088] In an exemplary embodiment, a sleep quality monitoring module implemented in an application running on a device, such as a mobile phone, a tablet, or a similarly functioning device, aggregates such parameters from connected devices. In further embodiments such connected devices include the stimulation delivery device. In some embodiments, a sleep quality monitoring module is implemented on the stimulation delivery device.

[0089] In further embodiments these measurements are analyzed, possibly along with measures of sleep quality. In an exemplary embodiment, analysis of user aspects or context are used in combination with measures of sleep quality to identify periods where sleep quality may be affected by that context.

[0090] In some embodiments, measurements are taken during sleep; in some embodiments, measurements are taken at other times. In further embodiments, measurements taken at other times may be specifically scheduled to provide the most relevant information (e.g., HRV while resting upon waking for sleep quality; alpha wave measurements both during and after stimulation, cognitive assessments during daytime periods of productive wakefulness, etc.).

[0091] In some embodiments, measurements of sleep quality related parameters may be taken passively; in some embodiments, users may be prompted or scheduled to provide information related to sleep quality (e.g., by completing an assessment task or donning a specific measurement apparatus). In some embodiments, third parties such as a user's caregiver are prompted or scheduled to provide or facilitate the collecting of measurements.

[0092] In some embodiments, the present disclosure provides for monitoring sleep interruptions. In an exemplary embodiment, sleep interruptions are detected using actigraphy, such actigraphy provided from one or more devices associated with the user, and either worn or in proximity to the user while sleeping. In a further exemplary embodiment, such actigraphy is provided by sensors incorporated into the stimulation delivery device (cf. sleep mask) worn by the user throughout their sleep period. In an exemplary embodiment, actigraphy is monitored continuously with a worn actigraphy device, such as a watch with actigraphic measurement capability.
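One way to derive rest and active periods from such an actigraphy trace is sketched below in Python, loosely following the median-filter-and-cutoff steps summarized for FIG. 37. The exemplary embodiment in this disclosure names MATLAB APIs such as "medfilt1"; the numpy/scipy equivalents, kernel size, sampling interval, and median-based cutoff used here are assumptions for illustration.

```python
# Hypothetical actigraphy segmentation sketch: median-filter the activity trace,
# threshold it into rest/active samples, and return contiguous rest-period
# durations. Kernel size, cutoff rule, and sampling interval are assumptions.
import numpy as np
from scipy.signal import medfilt

def rest_period_durations(activity, sample_minutes=1.0, kernel_size=15, cutoff=None):
    """Return durations (in minutes) of contiguous rest periods in a 1-D actigraphy trace."""
    smoothed = medfilt(np.asarray(activity, dtype=float), kernel_size=kernel_size)
    if cutoff is None:
        cutoff = float(np.median(smoothed))        # assumed cutoff separating active vs. rest samples
    is_rest = smoothed < cutoff
    durations, run = [], 0
    for rest in is_rest:                            # run-length encode contiguous rest samples
        if rest:
            run += 1
        elif run:
            durations.append(run * sample_minutes)
            run = 0
    if run:
        durations.append(run * sample_minutes)
    return np.asarray(durations)
```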

[0093] In some embodiments, actigraphic observations include measurement, observation, and/or logging of one or more of: acceleration, gravity, location, position, orientation. In some embodiments, measurements and/or observations are made of one or more body parts. In some embodiments, actigraphic measures are computed from actigraphic observations. In some embodiments, actigraphic measures are responsive to information observed, transmitted, or recorded regarding at least in part: environment, time of day, user self-reports, history, demographic information, diagnosis, device interactions, on-line activity, third party assessment.

[0094] Extensive clinical and preclinical scientific research has utilized sensory stimulation using steady state auditory and visual stimulation in combination with EEG to evaluate sensory function, brain network dynamics, and pathophysiological changes related to disease (Herrmann, C. S. (2001). "Human EEG responses to 1-100 Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena." Exp Brain Res 137(3-4): 346-353; Vialatte, F. B., M. Maurice, J. Dauwels and A. Cichocki (2010). "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives." Prog Neurobiol 90(4): 418-438; Tada, M., K. Kirihara, D. Koshiyama, M. Fujioka, K. Usui, T. Uka, M. Komatsu, N. Kunii, T. Araki and K. Kasai (2019). "Gamma-Band Auditory Steady-State Response as a Neurophysiological Marker for Excitation and Inhibition Balance: A Review for Understanding Schizophrenia and Other Neuropsychiatric Disorders." Clin EEG Neurosci: 1550059419868872; Richard, N., M. Nikolic, E. L. Mortensen, M. Osler, M. Lauritzen and K. Benedek (2020). "Steady-state visual evoked potential temporal dynamics reveal correlates of cognitive decline." Clin Neurophysiol 131(4): 836-846). Recent findings showing frequency-specific therapeutic benefits of sensory-evoked brain gamma oscillation on multiple hallmarks of AD pathology in transgenic animals (Iaccarino, H. F., A. C. Singer, A. J. Martorell, A. Rudenko, F. Gao, T. Z. Gillingham, H. Mathys, J. Seo, O. Kritskiy, F. Abdurrob, C. Adaikkan, R. G. Canter, R. Rueda, E. N. Brown, E. S. Boyden and L. H. Tsai (2016). "Gamma frequency entrainment attenuates amyloid load and modifies microglia." Nature 540(7632): 230-235; Martorell, A. J., A. L. Paulson, H. J. Suk, F. Abdurrob, G. T. Drummond, W. Guan, J. Z. Young, D. N. Kim, O. Kritskiy, S. J. Barker, V. Mangena, S. M. Prince, E. N. Brown, K. Chung, E. S. Boyden, A. C. Singer and L. H. Tsai (2019). "Multi-sensory Gamma Stimulation Ameliorates Alzheimer's-Associated Pathology and Improves Cognition." Cell 177(2): 256-271 e222) initiated clinical studies to evaluate the potential benefit of chronic, repeated audio-visual sensory stimulation in MCI and mild to moderate AD patients. Results provided in the Examples of the present disclosure provide the first evidence that sensory-stimulation-induced 40 Hz gamma-band steady-state oscillation improves clinical symptoms in AD patients.

[0095] Sleep disorders in MCI and AD patients are well recognized, having a 35% to 60% prevalence of some form(s) of sleep abnormalities. Though early detection of sleep disorders is of particular significance given the established link between sleep dysfunction and AD pathology, detecting these pathological changes in patients is not straightforward. Sleep questionnaires used in clinical practice for sleep disorders, such as the Pittsburgh sleep quality index or the Athens insomnia scale, provide limited value since patients frequently do not recognize sleep disturbances (Most, E. I., S. Aboudan, P. Scheltens and E. J. Van Someren (2012). "Discrepancy between subjective and objective sleep disturbances in early- and moderate-stage Alzheimer disease." Am J Geriatr Psychiatry 20(6): 460-467). Unquestionably, polysomnogram (PSG) studies are best suited to detect and monitor sleep abnormalities and reveal changes in sleep architecture; however, their application to MCI or AD patients is particularly challenging due to the patients' poor cooperation. Monitoring sleep changes with PSG in MCI and AD patients in response to therapeutic intervention over a longer period of time is also impractical. Recently, sleep monitoring with actigraphy in AD patients has become prevalent since a strong correlation between PSG and actigraphy data in sleep and wake time has been established (Ancoli-Israel, S., B. W. Palmer, J. R. Cooke, J. Corey-Bloom, L. Fiorentino, L. Natarajan, L. Liu, L. Ayalon, F. He and J. S. Loredo (2008). "Cognitive effects of treating obstructive sleep apnea in Alzheimer's disease: a randomized controlled study." J Am Geriatr Soc 56(11): 2076-2081). Furthermore, patients tolerate wrist actigraphy devices, and actigraphy data can be collected continually over several weeks. This is an important additional advantage when the onset of the treatment is unknown. In our study, actigraphy was used to continuously monitor the activity of patients over a 6-month period. Analysis of the current actigraphy data revealed nighttime rest/sleep-activity/awake dynamics identical to those based on PSG data analysis. This observation further validates the applicability of continuous monitoring of nighttime sleep-wake activity with actigraphy, and its suitability for monitoring AD patients.

[0096] In some embodiments, the present technological solution employs monitoring of brain wave parameters to determine stimulus parameters. In an exemplary embodiment, identification of a subject's dominant primary alpha wave frequency is used at least in part to determine the frequency of stimulation applied to the subject. In an exemplary embodiment, a stimulation is applied at four times the subject's dominant primary alpha wave frequency. In some embodiments, stimulation is applied at an integer multiple of the subject's dominant primary alpha wave frequency. In some embodiments, a subject's dominant primary alpha wave frequency may be determined based at least in part on one or more of: observations or measurements of a subject's brain wave parameters, demographic information associated with a subject, historical information associated with a subject, profile information associated with a subject.
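A minimal Python sketch of this alpha-derived choice of stimulation frequency is shown below; the alpha band limits, Welch settings, and peak-picking rule are assumptions for illustration, and estimating the peak from resting EEG is only one of the determination methods listed above.

```python
# Hypothetical sketch: estimate the subject's dominant alpha frequency from
# resting EEG and scale it by an integer multiple (e.g., 4x) to choose the
# stimulation frequency. Band limits and spectral settings are assumptions.
import numpy as np
from scipy.signal import welch

def dominant_alpha_frequency(eeg, fs, alpha_lo=8.0, alpha_hi=12.0):
    """Frequency of the largest spectral peak inside the alpha band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(8 * fs))
    in_band = (freqs >= alpha_lo) & (freqs <= alpha_hi)
    return float(freqs[in_band][np.argmax(psd[in_band])])

def stimulation_frequency(eeg, fs, multiple=4):
    """For example, a 10 Hz dominant alpha peak with multiple=4 yields 40 Hz stimulation."""
    return multiple * dominant_alpha_frequency(eeg, fs)
```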

[0097] In some embodiments, the present technological solution employs monitoring of brain wave parameters to categorize a user’s risk of developing MCI or AD, to assess their MCI or AD progression, or to diagnose MCI or AD. In a further embodiment, such categorization is based, at least in part, on detected reductions in the amount of gamma brainwave activity.

[0098] In some embodiments, the present technological solution monitors one or more subjects in a space, such monitoring including one or more of: presence in the space, proximity to stimulation delivery devices, levels and values of stimulus parameters incident on each subject, and activity and behaviors of the subject. In an exemplary embodiment, a subject's presence in a space is observed and recorded. In an exemplary embodiment, the audio or visual characteristics of the delivered stimulus are observed and recorded at one or more of: various locations in the space, one or more subjects' locations in the space, one or more subjects' eyes, or one or more subjects' ears. In some embodiments, logs of such monitoring are employed to construct a measure of each subject's aggregate exposure to effective stimulus while in the space.
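
One way such an aggregate-exposure measure might be constructed from monitoring logs is sketched below; the log record format, field names, and the effectiveness threshold are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class MonitoringSample:
    """Hypothetical monitoring log entry for one subject at one point in time."""
    timestamp_s: float     # sample time, seconds
    present: bool          # subject detected in the space
    stimulus_level: float  # stimulus intensity observed at the subject's location

def aggregate_exposure(samples: Iterable[MonitoringSample],
                       sample_period_s: float = 1.0,
                       effective_level: float = 0.5) -> float:
    """Total time (seconds) during which the subject was present in the space
    and the delivered stimulus at their location met an effectiveness threshold."""
    return sum(sample_period_s for s in samples
               if s.present and s.stimulus_level >= effective_level)
```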

[0099] In some embodiments, monitoring information is communicated to a system that contributes to the operation of an automated interaction with a user or third party. In an exemplary embodiment, monitoring information is communicated to a system operating a chatbot interacting with a user or caregiver. [00100] In some embodiments, monitored information includes or is responsive to analysis of monitored information. In some embodiments, monitored information includes or is responsive at least in part to sleep fragmentation analyses from actigraphy and/or comparisons of two or more sleep fragmentation analyses from actigraphy.

Feedback

[00101] In some embodiments, the present disclosure provides feedback to users and third parties relating to aspects of the user’s sleep quality. In further aspects the disclosure provides such feedback responsive to the delivery of gamma stimulation therapy, or responsive to monitoring or the analysis of monitoring. In some embodiments feedback includes feedback or information about the use of the stimulation device, with or without information about monitoring or analysis.

[00102] In some embodiments, feedback can include reports to the user or to a third party about aspects of the stimulation, including duration, parameters, schedule, etc.; in some embodiments feedback may include values or summaries of values of measurements or monitoring of sleep-related parameters; in some embodiments feedback may include information about sleep quality improvement, including the frequency, duration, and distribution of rest periods. In some embodiments third parties may include caregivers, healthcare professionals, providers, insurers, or employers. In some embodiments feedback may be provided on the stimulation device, on a secondary device (such as a phone or tablet), or remotely (e.g., on a console or other device associated with a third party). In some embodiments, one or more of distributions, summary statistics of distributions, or characteristic parameters for fitted distributions, for one or more groups of one or more subjects and/or one or more time periods, are compared. In an exemplary embodiment (e.g., FIG. 37), distributions for two groups of subjects are compared and/or distributions within groups over subsequent periods (e.g., 12 weeks) are compared. In some embodiments, distributions for a single patient over two distinct sequential periods (e.g., 12 weeks) are compared. In some embodiments, differences between exponential decay constants are computed as a measure of the sleep quality difference between one or more subjects or time periods (e.g., 1513). In an exemplary embodiment, exponential decay constants, tau1 for a first period and tau2 for a second period, are determined. In a further embodiment, tau_diff = tau2 - tau1 is computed. In a further embodiment, tau_diff is employed as a measure of sleep quality improvement or decline; for example, tau_diff > 0 is reported as sleep quality improvement and/or tau_diff < 0 is reported as sleep quality decline (e.g., FIG. 36). In some embodiments, one or more of steps 1501 through 1513 are performed by Actigraphy Monitoring Module 130 (FIG. 33).
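
A minimal sketch of the tau_diff comparison described above follows, assuming the exponential is parameterized as exp(-t / tau) so that a larger time constant corresponds to longer typical rest periods; the helper names and the use of the sample mean as the maximum-likelihood estimate are illustrative assumptions.

```python
import numpy as np

def fit_time_constant(rest_durations):
    """Maximum-likelihood time constant tau for an exponential distribution
    exp(-t / tau) fitted to observed rest-period durations (the sample mean)."""
    return float(np.mean(rest_durations))

def sleep_quality_change(rest_periods_first, rest_periods_second):
    """tau_diff = tau2 - tau1; with the exp(-t / tau) parameterization assumed
    here, tau_diff > 0 (longer typical rest periods) corresponds to the sleep
    quality improvement reported above, and tau_diff < 0 to decline."""
    tau1 = fit_time_constant(rest_periods_first)
    tau2 = fit_time_constant(rest_periods_second)
    return tau2 - tau1
```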

[00103] In an exemplary embodiment, the user is presented, on a personal device connected or paired with the stimulation device (such as a phone or tablet), with a summary of their use of the stimulation device (including one or more of duration of wearing, stimulation applied, parameters used, etc.), or with a summary of the changes in their sleep quality, or a combination of these.

[00104] In an exemplary embodiment, a caregiver is presented, on a web dashboard linked to one or more users of one or more stimulation devices, with summaries of the use of the stimulation device (including one or more of duration of wearing, stimulation applied, parameters used, etc.), or with summaries of the changes in one or more users' sleep quality, or a combination of these.

[00105] In some embodiments, monitoring of one or more subjects in a space is employed to provide guidance associated with one or more subjects regarding locations, positions, behaviors, or attitudes within the space. In an exemplary embodiment, subjects are provided with such guidance directed at improving, for one or more subjects, one or more of: the effectiveness of received stimulus, or characteristics of received stimulus (e.g., light levels, volume, intensity, frequency, duration, variation, etc.). In some embodiments, such guidance is provided to third parties. In some embodiments, such guidance is provided to subjects. [00106] In some embodiments, feedback is communicated or presented to one or more stimulus recipients. In some embodiments, feedback is communicated or presented to a third party, including but not limited to a clinician, delivery facility staff, device operator, device manufacturer, therapy component provider, caregiver, payor, provider, employer, family member, researcher, or health agency.

[00107] In some embodiments, feedback communicated to third parties is modified, processed, filtered, selected, or presented to achieve one or more of: reduction in a recipient's stress or concerns regarding a stimulus recipient, improvement in outcomes of one or more stimulus recipients, reduction in costs associated with one or more stimulus recipients, or compliance with regulations associated with stimulus delivery.

[00108] In some embodiments feedback is communicated or presented through a programmatic interaction with a user or third party responsive at least in part to monitored information. In an exemplary embodiment, feedback is communicated by a chatbot or chatbot component interacting with a user or caregiver. [00109] In some embodiments, feedback incorporates or consists of processed or unprocessed monitored information or analysis. In some embodiments, feedback incorporates and/or consists of and/or is responsive at least in part to sleep fragmentation analyses from actigraphy and/or comparisons of two or more sleep fragmentation analyses from actigraphy.

Motivation

[00110] In some embodiments, the present disclosure provides for motivating users or third parties in the use of the stimulation device or other related activities or therapies. In further embodiments the disclosure provides such motivation responsive to the delivery of gamma stimulation therapy, or responsive to monitoring or the analysis of monitoring. [00111] Motivation may include instructions on use (or links to instruction on use), reminders or notifications, calendar events, rewards, progress indicators, comparisons with targets or goals, or comparisons with other users or target populations of users or demographic groups.

[00112] In an exemplary embodiment, users are reminded, when they go to bed or shortly before their usual bedtime, of progress they have achieved by using the stimulation device shortly before bed in the past; such a reminder appearing on one or more of a personal device (e.g., as a notification), the stimulation device (e.g., as a flashing light or audio tone), or another device (e.g., a desktop calendar); the content and timing of such a reminder being further responsive to analysis of times and durations of device use associated with improved sleep quality.

[00113] In an exemplary embodiment, users are presented, on a personal device associated with the stimulation device or with the stimulation device’s user, with instructions, motivating rewards, prompts, or achievements encouraging their use of the device responsive to the user’s history of device uses that have resulted in sleep quality improvement.

[00114] In an exemplary embodiment, caregivers are presented, on a web dashboard or console, with instructions or guidance on how to encourage one or more users to use their stimulation devices in contexts or using methods (e.g., schedules, techniques, environmental conditions, etc.) likely to result in sleep quality improvement. In further exemplary embodiments, these contexts or methods are prioritized or selected at least in part based on monitoring or analysis of the use of those one or more users, or other users, associated with effective sleep quality improvement.

[00115] In some embodiments, motivation is communicated or presented through a programmatic interaction with a user or third party responsive at least in part to monitored information. In an exemplary embodiment, feedback is communicated by a chatbot or chatbot component interacting with a user or caregiver.

[00116] In some embodiments, motivation incorporates or consists of feedback. In some embodiments, motivation incorporates and/or consists of and/or is responsive at least in part to sleep fragmentation analyses from actigraphy and/or comparisons of two or more sleep fragmentation analyses from actigraphy.

Analysis

[00117] In some embodiments, beneficial changes in actigraphy are identified by computing statistical measures associated with the distribution of one or more of: sleep fragmentation, rest periods during sleep periods, sleep interruptions during sleep periods (FIG. 37). In an exemplary embodiment, such analysis may include generating a distribution of the durations of rest periods or other measures of sleep fragmentation; in further embodiments such analysis may include comparing these distributions over time, or responsive to varying treatment parameters or patterns of device usage.

[00118] In some embodiments, the present technological solution includes methods and systems directed at analyzing sleep fragmentation from actigraphy. In some embodiments such methods and systems include collecting and/or receiving actigraphy data for one or more devices associated with one or more subjects over one or more time periods (1501, FIG. 37; e.g., gray in FIG. 34). In some embodiments, such methods and systems further include one or more of: bandpass filtering of at least a portion of such actigraphy data (1502, FIG. 37); extraction of the amplitude of at least a portion of such actigraphy acceleration data at one or more reduced sampling frequencies (1503, FIG. 37). In some embodiments, such methods and systems further include determination of a distribution of estimated accelerations (1504, FIG. 37). In some embodiments, such methods and systems further include identification of one or more device-specific cutoffs distinguishing active vs. inactive times based at least in part on device characteristics of actigraphy values associated with device non-use (1505, FIG. 37; e.g., black "cutoff" in FIG. 35), and categorization of actigraphy data based on such distinguishing. In an exemplary embodiment, data points with actigraphy values above values associated with device non-use are assigned a score of 1 while all other data points are assigned a value of 0 (1506, FIG. 37). In some embodiments, such methods and systems include generation of a smoothed estimate of activity from actigraphy data (1507, FIG. 37; e.g., green in FIG. 34). In some embodiments, such methods and systems further include determination of an initial estimated mid-night point for each night from a smoothed estimate of activity (1507, FIG. 37; e.g., black dots in FIG. 35). In some embodiments, an initial estimated mid-night point corresponds to the minimum of a smoothed estimate of activity over the period between 12:00 PM on consecutive days.
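
The sequence of steps 1501 through 1507 can be sketched roughly as follows. All filter settings, window lengths, the reduced sampling frequency, and the percentile-based cutoff are illustrative assumptions rather than the disclosed parameter values; an initial mid-night estimate could then be taken as the index minimizing the smoothed activity between consecutive noons.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def preprocess_actigraphy(accel, fs, reduced_fs=1.0, cutoff_percentile=20.0):
    """Sketch of steps 1501-1507: band-pass filter raw acceleration, extract its
    amplitude at a reduced sampling frequency, choose a device-specific
    active/inactive cutoff, binarize, and smooth the resulting activity trace."""
    # 1502: band-pass filter the raw acceleration (pass band chosen for illustration)
    sos = butter(4, [0.25, 3.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, accel)

    # 1503: amplitude at a reduced sampling frequency (one value per 1/reduced_fs s)
    step = int(fs / reduced_fs)
    n = (len(filtered) // step) * step
    amplitude = np.abs(filtered[:n]).reshape(-1, step).max(axis=1)

    # 1504-1505: distribution of estimated accelerations and a device-specific
    # cutoff separating values associated with device non-use from active values
    # (a low percentile of the amplitude distribution is assumed here)
    cutoff = float(np.percentile(amplitude, cutoff_percentile))

    # 1506: data points above the cutoff are scored 1 (active), all others 0
    activity = (amplitude > cutoff).astype(int)

    # 1507: smoothed estimate of activity (moving average; ~30 min window assumed)
    window = max(1, int(30 * 60 * reduced_fs))
    smoothed = np.convolve(activity, np.ones(window) / window, mode="same")
    return activity, smoothed, cutoff
```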

[00119] In some embodiments, the present solution further includes methods and systems directed at determining the temporal extent of one or more nighttime sleep periods (black emphasized periods in FIG. 34), such methods including: an optimization directed at determining an optimized mid-night time point and surrounding temporal window, including: assigning credit to distinguished inactive data points (e.g., those assigned 0 values) and penalty to distinguished active data points (e.g., those assigned 1 values) within an optimized temporal window around an optimized mid-night point, and assigning credit to distinguished active data points and penalty to distinguished inactive data points outside such optimized temporal window (1508, FIG. 37). In some embodiments, the present solution further includes methods and systems directed at identifying active and rest periods (e.g., gray bars in FIG. 34) within each nighttime sleep period (1509, FIG. 37). In an exemplary embodiment, periods with actigraphy values above values associated with device non-use are categorized as active periods while all other data points are categorized as rest periods (1506, FIG. 37). In an exemplary embodiment, rest periods are assigned a value of 1 and active periods are assigned a value of 0.
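
The credit/penalty optimization of step 1508 could take roughly the following form; the scoring function, search ranges, step sizes, and the brute-force search are assumptions made for illustration and not the disclosed procedure. The input is assumed to be the binary activity array (1 = active, 0 = inactive) produced above.

```python
import numpy as np

def optimize_sleep_window(activity, initial_midnight, samples_per_hour=3600,
                          max_shift_h=3, min_half_h=2, max_half_h=6, step_h=0.25):
    """Step 1508 (sketch): search for a mid-night point and surrounding window that
    maximize credit for inactive samples (0) inside the window and active samples (1)
    outside it, with penalties for the opposite cases.  `activity` is a binary
    NumPy array; `initial_midnight` is the index of the initial mid-night estimate."""
    step = max(1, int(step_h * samples_per_hour))
    best_mid, best_half, best_score = initial_midnight, None, -np.inf
    for shift in range(-max_shift_h * samples_per_hour,
                       max_shift_h * samples_per_hour + 1, step):
        mid = initial_midnight + shift
        for half in range(min_half_h * samples_per_hour,
                          max_half_h * samples_per_hour + 1, step):
            lo, hi = max(mid - half, 0), min(mid + half, len(activity))
            inside = activity[lo:hi]
            outside = np.concatenate([activity[:lo], activity[hi:]])
            score = (np.sum(inside == 0) - np.sum(inside == 1)
                     + np.sum(outside == 1) - np.sum(outside == 0))
            if score > best_score:
                best_mid, best_half, best_score = mid, half, score
    return best_mid, best_half  # optimized mid-night index and half-window length
```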

[00120] In some embodiments, the present solution further includes methods and systems directed at characterizing the distribution of identified rest periods, such methods including one or more of: gathering and/or accumulating rest periods from one or more nights or other time periods for one or more subjects or groups of subjects (1510, FIG. 37), determining the cumulative distribution of gathered rest periods (1511, FIG. 37), fitting a statistical distribution to the distribution of gathered rest periods. In some embodiments, such methods and systems further include (1512, FIG. 37): fitting an exponential distribution to the distribution of gathered rest periods, determining the exponential decay constant for a fitted exponential distribution. Some embodiments further include methods directed at determining and/or reporting and/or transmitting a value based at least in part on the determined exponential decay constant for an exponential distribution fit to the cumulative distribution of rest periods for one or more subjects over one or more days and/or other periods (1513, FIG. 37), and/or determining and/or reporting and/or transmitting a value based at least in part on a comparison between exponential decay constants for two or more exponential distributions fit to the cumulative distributions of rest periods for one or more subjects over one or more days and/or other periods (1513, FIG. 37). [00121] In some embodiments, one or more such determined exponential decay constants, comparisons of such constants, functions of such constants, or values responsive to such constants, are reported singly or multiply as a measure of sleep quality, sleep improvement, sleep progress, treatment effectiveness, treatment results, disease progression, and/or other metric of treatment success, failure, effectiveness, or outcome. In an exemplary embodiment, reports responsive to or incorporating a value based on the difference between exponential decay constants for different users or different time periods are reported to users or third parties.
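
Steps 1510 through 1513 might be sketched as below. Fitting a straight line to the log of the empirical survival (complementary cumulative) distribution is one of several ways to obtain the decay constant and is an assumption here; the resulting constants could then feed the tau_diff comparison sketched in the Feedback section above (step 1513).

```python
import numpy as np

def rest_period_decay_constant(rest_durations):
    """Steps 1510-1512 (sketch): accumulate rest-period durations, form their
    empirical survival (complementary cumulative) distribution, and fit an
    exponential exp(-t / tau) via a straight-line fit to the log survival curve."""
    t = np.sort(np.asarray(rest_durations, dtype=float))
    survival = 1.0 - np.arange(1, len(t) + 1) / len(t)
    keep = survival > 0  # drop the last point, where log(0) is undefined
    slope, _ = np.polyfit(t[keep], np.log(survival[keep]), 1)
    return -1.0 / slope  # fitted decay (time) constant tau
```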

[00122] In some embodiments, analysis of sleep fragmentation, including characterizing the distribution of identified rest periods, is used at least in part to confirm and/or assess and/or report one or more of: beneficial changes in actigraphy during sleep periods, increases in the frequency of restful periods during sleep periods, reduction in the frequency of sleep interruptions during sleep periods, or improvement and/or maintenance and/or prevention of degradation of sleep-related health outcomes.

[00123] In some embodiments, analysis of sleep fragmentation, including characterizing the distribution of identified rest periods, is used at least in part to determine, adjust, modify, and/or select one or more of: stimulation parameters, stimulation modalities, opportunities for delivering stimulation, goals for reduction in sleep fragmentation, devices for use in stimulation, locations for use in stimulation, environmental adjustments associated with stimulation, user state adjustments associated with stimulation, activities for use in conjunction with stimulation, or third party roles in stimulation. In some embodiments, such use of analysis of sleep fragmentation to determine, adjust, modify, and/or select is used in conjunction with information responsive to one or more of: user and/or third party history, location, profile, preferences, diagnosis, task, activity, relationship, assessment, test results, feedback, observation, prognosis, reports, device use history, or treatment history. In some embodiments, such use of analysis of sleep fragmentation to determine, adjust, modify, and/or select is used in conjunction with information responsive to one or more of: available stimulation devices, stimulation or other characteristics of available stimulation devices, audio environment information, visual environment information, user context information, or third party context information. In an exemplary embodiment, devices and/or opportunities for stimulation and/or parameters associated with effective and/or improved and/or mitigated outcomes, as assessed at least in part by patterns in analysis of sleep fragmentation, are presented and/or suggested to users and/or third parties. [00124] In some embodiments, measured or observed sleep-related parameters are analyzed locally and/or on a server to compute measures of sleep quality.

[00125] In some embodiments, comparison of measures or analysis of sleep quality or sleep fragmentation over time may be used to characterize the progression or risk of sleep-associated disease, such as AD. In some embodiments measures of sleep quality computed by the analysis are used as a measure of AD disease progression, risk, or diagnosis. In an exemplary embodiment, detection of specific levels of sleep fragmentation as determined by the analysis, or changes to those levels over time, are used to identify patients at risk for or in the early stages of AD.

[00126] In some embodiments, measured sleep-related and other parameters are aggregated from multiple users to identify population or demographic patterns related to sleep improvement and associations between program parameters or other aspects of stimulation delivery. In some embodiments measured sleep-related and other parameters from a single user are used to identify user-specific patterns.

[00127] In some embodiments, identified patterns are used to inform one or more of: the selection of program parameters or values, treatment schedules, motivations, or communications with users, caregivers, or healthcare providers, for one or more users or populations of users. [00128] In some embodiments, analysis or results of analysis may be reported to users, caregivers, healthcare providers, or other third parties. In an exemplary embodiment, disease progression analysis associated with AD progression is reported to healthcare providers or caregivers.

Program parameters and parameter values

[00129] In some embodiments, stimulus program parameters are configured with a stimulus frequency (e.g., fs in FIG. 31) of approximately 35 Hz to approximately 45 Hz for both audio and visual signals. In some embodiments, audio and visual signals are offset relative to each other by a delay (e.g., td in FIG. 31). In an exemplary embodiment, audio and visual signals are synchronized (td = 0 s).

[00130] In some embodiments, stimulus program parameters are configured with a variety of timing and intensity parameters. In an exemplary embodiment, these parameters include those illustrated in FIG. 31. In some embodiments, these parameters are preconfigured; in some embodiments they are adjusted at least in part by a third party such as a caregiver or healthcare provider; in some embodiments one or more parameters are adjusted responsive to measurements or analysis of one or more of: user context, measured sleep quality related parameters associated with the user, or observed or detected use of the stimulation device. In some embodiments, stimulus parameters are adjusted responsive to detected or analyzed sleep-related AD symptom progression.

[00131] In some embodiments, the present disclosure evokes gamma wave oscillations through a variety of frequency and intensity parameters.

[00132] In some embodiments, non-invasive stimulation includes one or more of: non-invasive sensory stimulation, non-invasive gamma stimulation, non-invasive gamma sensory stimulation, gamma stimulation therapy, non-invasive gamma stimulation therapy. In some embodiments, non-invasive stimulation is delivered as non-invasive therapy.

[00133] In an exemplary embodiment, subjects receive one hour per day of non-invasive sensory gamma stimulation therapy. In some embodiments, subjects receive two hours of non-invasive sensory stimulation twice a day. In some embodiments, subjects receive multiple periods of non-invasive stimulation of varying durations and totals over the course of a day. In some embodiments, the timing, distribution of durations, and/or total duration throughout the day are responsive to one or more of: delivered stimulus values, environmental values, observed user state, or observed or inferred effectiveness. In an exemplary embodiment, a subject is delivered brief periods of stimulus throughout a day, at times determined to be suitable for effective stimulus delivery, with a total of all periods responsive at least in part to a cumulative measure of stimulus effectiveness. In some embodiments, stimulus effectiveness is responsive to an entrainment score.

[00134] In some embodiments, one or more stimulus parameters or other aspects are responsive at least in part to sleep fragmentation analyses from actigraphy and/or comparisons of two or more sleep fragmentation analyses from actigraphy. In an exemplary embodiment, varying combinations of stimulus parameters are used during different time periods, and subsequent stimulation parameters are selected at least in part based on comparison of sleep fragmentation analyses from actigraphy among at least some of those periods. In some embodiments, stimulation parameters are selected to optimize, improve, and/or enhance sleep improvement as assessed at least in part by sleep fragmentation analyses from actigraphy. [00135] In some embodiments, the present disclosure delivers 40 Hz non-invasive audio, visual, or combined audio-visual stimulation. In some embodiments stimulus is delivered at one or more stimulation frequencies (e.g., fs in FIG. 31) in the approximate range of 35-45 Hz. In some embodiments, "gamma" refers to frequencies in the range 35-45 Hz. In some embodiments stimulus is delivered based at least in part on a user's detected, reported, or demographically or individually associated or dominant alpha wave frequency. [00136] In some embodiments, specific visual parameters include one or more of: stimulation frequency, intensity (brightness), hue, visual patterns, spatial frequency, contrast, and duty-cycle. In an exemplary embodiment, visual stimulation is provided at a stimulation frequency of 40 Hz, brightness between 0 µW/cm2 and 1120 µW/cm2, and a 50% visual signal duty-cycle.

[00137] In some embodiments, non-invasive stimulation is delivered as combined visual and auditory stimulation, delivered at 40 Hz frequency. In some embodiments, visual and auditory stimulation is synchronized to begin each cycle simultaneously. In some embodiments, the beginning of each auditory and visual stimulation cycle is offset by a configured time. In some embodiments, visual and auditory signals are delivered at an intensity clearly recognized by subjects and adjusted to their tolerance level.

[00138] In some embodiments, at least some of the parameters or characteristics of the non-invasive signal administered to a subject correspond to those specified in one or more of US Patents US 10307611 B2, US 10293177 B2, or US 10279192 B2. In some embodiments, at least some of the parameters or characteristics of the non-invasive signal administered to a subject correspond to those specified in one or more of US Patents US 10159816 B2 or US 10265497 B2.

[00139] In some embodiments, specific audio parameters include one or more of: stimulation frequency, intensity (volume), and duty-cycle. In some embodiments, the audio frequency is adjusted responsive to a subject's hearing characteristics, for example, to frequencies that the subject hears well. In an exemplary embodiment, audio stimulation is provided at an audio tone frequency of 7,000 Hz, a volume level between 0 dBA and 80 dBA, and a 0.57% audio signal duty-cycle.
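
The exemplary visual and audio parameter values recited above can be collected into a single configuration structure. The structure itself, its field names, and the 60 dBA default volume are illustrative assumptions and not part of the disclosure; brightness is expressed in microwatts per square centimeter.

```python
from dataclasses import dataclass

@dataclass
class StimulusProgram:
    """Illustrative grouping of the exemplary audio and visual parameter values."""
    stim_frequency_hz: float = 40.0           # fs: shared audio/visual stimulation frequency
    av_delay_s: float = 0.0                   # td: audio/visual offset (0 s = synchronized)
    visual_brightness_uw_cm2: float = 1120.0  # upper end of the 0-1120 uW/cm2 range
    visual_duty_cycle: float = 0.50           # 50% visual signal duty-cycle
    audio_tone_hz: float = 7000.0             # audio carrier tone frequency
    audio_volume_dba: float = 60.0            # assumed value within the 0-80 dBA range
    audio_duty_cycle: float = 0.0057          # 0.57% audio signal duty-cycle

program = StimulusProgram()  # exemplary 40 Hz synchronized audio-visual program
```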

[00140] In some embodiments, non-invasive stimulation parameters are selected to evoke gamma wave oscillations in the brains of human subjects. In some embodiments, non-invasive stimulation parameters are selected to induce alpha waves in human subjects (FIG. 40). In some embodiments, the non-invasive stimulation parameters are directed at inducing beta waves in human subjects. In some embodiments, the non-invasive stimulation parameters are directed at inducing gamma waves in human subjects.

[00141] In some embodiments light levels and hue are adjusted to avoid fatiguing the subject. In some embodiments light levels and hue are adjusted to provide motivation to the subject. In some embodiments, parameters to each ear or eye are adjusted in a similar manner. In some embodiments, parameters to each ear or eye are adjusted differently. In an exemplary embodiment, audio and visual parameters such as tone and hue are varied to provide engagement or motivation to the subject to continue applying the stimulus or monitoring.

Neural Stimulation via Visual Stimulation

[00142] In some embodiments, systems and methods of the present disclosure are directed to controlling frequencies of neural oscillations using visual signals and, in doing so, causing an improvement in sleep quality. The visual stimulation can adjust, control or otherwise affect the frequency of the neural oscillations to provide beneficial effects to one or more cognitive states or cognitive functions of the brain, or the immune system, while mitigating or preventing adverse consequences on a cognitive state or cognitive function. The visual stimulation can, for example, provide beneficial improvements in the sleep quality experienced by a user. The visual stimulation can result in brainwave entrainment that can provide beneficial effects to one or more cognitive states of the brain, cognitive functions of the brain, the immune system, or inflammation. In some cases, the visual stimulation can result in a local effect, such as in the visual cortex and associated regions. In some cases, the visual stimulation can result in a more expansive effect and cause alterations in physiology beyond the nervous system. The brainwave entrainment can, for example, treat sleep abnormalities. Sleep abnormalities, such as sleep fragmentation, have multiple impacts on human physiology, causing dysfunction not only in the nervous system but also impairing body metabolism and the immune defense system. The brainwave entrainment can treat disorders, maladies, diseases, inefficiencies, injuries or other issues related to a cognitive function of the brain, cognitive state of the brain, the immune system, or inflammation.

[00143] Neural oscillation occurs in humans or animals and includes rhythmic or repetitive neural activity in the central nervous system. Neural tissue can generate oscillatory activity by mechanisms within individual neurons or by interactions between neurons. Oscillations can appear as either oscillations in membrane potential or as rhythmic patterns of action potentials, which can produce oscillatory activation of post-synaptic neurons. Synchronized activity of a group of neurons can give rise to macroscopic oscillations, which, for example, can be observed by electroencephalography (“EEG”), magnetoencephalography (“MEG”), functional magnetic resonance imaging (“fMRI”), or electrocorticography (“ECoG”). Neural oscillations can be characterized by their frequency, amplitude and phase. These signal properties can be observed from neural recordings using time-frequency analysis. [00144] For example, an EEG can measure oscillatory activity among a group of neurons, and the measured oscillatory activity can be categorized into frequency bands as follows: delta activity corresponds to a frequency band from 1-4 Hz; theta activity corresponds to a frequency band from 4-8 Hz; alpha activity corresponds to a frequency band from 8-12 Hz; beta activity corresponds to a frequency band from 13-30 Hz; and gamma activity corresponds to a frequency band from 30-70 Hz.
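
The band boundaries listed above map directly onto a small helper; the function and dictionary below are illustrative only, and where adjacent bands share an edge the first listed band is returned.

```python
# EEG frequency bands as enumerated above, in Hz (band edges are approximate)
EEG_BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 12),
    "beta": (13, 30),
    "gamma": (30, 70),
}

def classify_band(frequency_hz):
    """Return the name of the first EEG band containing the given frequency,
    or None if the frequency falls outside the listed ranges."""
    for name, (low, high) in EEG_BANDS.items():
        if low <= frequency_hz <= high:
            return name
    return None
```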

[00145] The frequency and presence or activity of neural oscillations can be associated with cognitive states or cognitive functions such as information transfer, perception, motor control and memory. Based on the cognitive state or cognitive function, the frequency of neural oscillations can vary. Further, certain frequencies of neural oscillations can have beneficial effects or adverse consequences on one or more cognitive states or functions. However, it may be challenging to synchronize neural oscillations using an external stimulus to provide such beneficial effects or reduce or prevent such adverse consequences.

[00146] Brainwave entrainment (e.g., neural entrainment or brain entrainment) occurs when an external stimulation of a particular frequency is perceived by the brain and triggers neural activity in the brain that results in neurons oscillating at a frequency corresponding to the particular frequency of the external stimulation. Thus, brain entrainment can refer to synchronizing neural oscillations in the brain using external stimulation such that the neural oscillations occur at a frequency that corresponds to the particular frequency of the external stimulation.

[00147] Systems and methods of the present disclosure can provide external visual stimulation to achieve brain entrainment. For example, external signals, such as light pulses or high-contrast visual patterns, can be perceived by the brain. The brain, responsive to observing or perceiving the light pulses, can adjust, manage, or control the frequency of neural oscillations. The light pulses generated at a predetermined frequency and perceived by ocular means via a direct visual field or a peripheral visual field can trigger neural activity in the brain to induce brainwave entrainment. The frequency of neural oscillations can be affected at least in part by the frequency of the light pulses. While high-level cognitive function may gate or interfere with some regions being entrained, the brain can react to the visual stimulation at the sensory cortices. Thus, systems and methods of the present disclosure can provide brainwave entrainment using an external visual stimulus such as light pulses emitted at a predetermined frequency to synchronize electrical activity among groups of neurons based on the frequency of the light pulses. The entrainment of one or more portions or regions of the brain can be observed based on the aggregate frequency of oscillations produced by the synchronous electrical activity in ensembles of cortical neurons. The frequency of the light pulses can cause or adjust this synchronous electrical activity in the ensembles of cortical neurons to oscillate at a frequency corresponding to the frequency of the light pulses.

[00148] FIG. 1 is a block diagram depicting a system to perform visual brain entrainment in accordance with an embodiment. The system 100 can include a neural stimulation system ("NSS") 105. The NSS 105 can be referred to as visual NSS 105 or NSS 105. In brief overview, the NSS 105 can include, access, interface with, or otherwise communicate with one or more of a light generation module 110, light adjustment module 115, unwanted frequency filtering module 120, profile manager 125, side effects management module 130, feedback monitor 135, data repository 140, visual signaling component 150, filtering component 155, or feedback component 160. The light generation module 110, light adjustment module 115, unwanted frequency filtering module 120, profile manager 125, side effects management module 130, feedback monitor 135, visual signaling component 150, filtering component 155, or feedback component 160 can each include at least one processing unit or other logic device, such as a programmable logic array engine, or a module configured to communicate with the database repository 150. The light generation module 110, light adjustment module 115, unwanted frequency filtering module 120, profile manager 125, side effects management module 130, feedback monitor 135, visual signaling component 150, filtering component 155, or feedback component 160 can be separate components, a single component, or part of the NSS 105. The system 100 and its components, such as the NSS 105, may include hardware elements, such as one or more processors, logic devices, or circuits. The system 100 and its components, such as the NSS 105, can include one or more hardware or interface components depicted in system 700 in FIGs. 7A and 7B. For example, a component of system 100 can include or execute on one or more processors 721, access storage 728 or memory 722, and communicate via network interface 718.

[00149] Still referring to FIG. 1, and in further detail, the NSS 105 can include at least one light generation module 110. The light generation module 110 can be designed and constructed to interface with a visual signaling component 150 to provide instructions or otherwise cause or facilitate the generation of a visual signal, such as a light pulse or flash of light, having one or more predetermined parameter. The light generation module 110 can include hardware or software to receive and process instructions or data packets from one or more module or component of the NSS 105. The light generation module 110 can generate instructions to cause the visual signaling component 150 to generate a visual signal. The light generation module 110 can control or enable the visual signaling component 150 to generate the visual signal having one or more predetermined parameters. [00150] The light generation module 110 can be communicatively coupled to the visual signaling component 150. The light generation module 110 can communicate with the visual signaling component 150 via a circuit, electrical wire, data port, network port, power wire, ground, electrical contacts or pins. The light generation module 110 can wirelessly communicate with the visual signaling component 150 using one or more wireless protocols such as BlueTooth, BlueTooth Low Energy, Zigbee, Z-Wave, IEEE 802.11, WIFI, 3G, 4G, LTE, near field communications (“NFC”), or other short, medium or long range communication protocols, etc. The light generation module 110 can include or access network interface 718 to communicate wirelessly or over a wire with the visual signaling component 150.

[00151] The light generation module 110 can interface, control, or otherwise manage various types of visual signaling components 150 in order to cause the visual signaling component 150 to generate, block, control, or otherwise provide the visual signal having one or more predetermined parameters. The light generation module 110 can include a driver configured to drive a light source of the visual signaling component 150. For example, the light source can include a light emitting diode (“LED”), and the light generation module 110 can include an LED driver, chip, microcontroller, operational amplifiers, transistors, resistors, or diodes configured to drive the LED light source by providing electricity or power having certain voltage and current characteristics.

[00152] In some embodiments, the light generation module 110 can instruct the visual signaling component 150 to provide a visual signal that includes a light wave 200 as depicted in FIG. 2A. The light wave 200 can include or be formed of electromagnetic waves. The electromagnetic waves of the light wave can have respective amplitudes and travel orthogonal to one another, as depicted by the amplitude of the electric field 205 versus time and the amplitude of the magnetic field 210 versus time. The light wave 200 can have a wavelength 215. The light wave can also have a frequency. The product of the wavelength 215 and the frequency is the speed of the light wave. For example, the speed of the light wave can be approximately 299,792,458 meters per second in a vacuum.

[00153] The light generation module 110 can instruct the visual signaling component 150 to generate light waves having one or more predetermined wavelengths or intensities. The wavelength of the light wave can correspond to the visible spectrum, ultraviolet spectrum, infrared spectrum, or some other wavelength of light. For example, the wavelength of the light wave within the visible spectrum can range from 390 to 700 nanometers ("nm"). Within the visible spectrum, the light generation module 110 can further specify one or more wavelengths corresponding to one or more colors. For example, the light generation module 110 can instruct the visual signaling component 150 to generate visual signals comprising one or more light waves having one or more wavelengths corresponding to one or more of ultraviolet (e.g., 10-380 nm), violet (e.g., 380-450 nm), blue (e.g., 450-495 nm), green (e.g., 495-570 nm), yellow (e.g., 570-590 nm), orange (e.g., 590-620 nm), red (e.g., 620-750 nm), or infrared (e.g., 750-1,000,000 nm). The wavelength can range from 10 nm to 100 micrometers. In some embodiments, the wavelength can be in the range of 380 to 750 nm.

[00154] The light generation module 110 can determine to provide visual signals that include light pulses. The light generation module 110 can instruct or otherwise cause the visual signaling component 150 to generate light pulses. A light pulse can refer to a burst of light waves. For example, FIG. 2B illustrates a burst of a light wave. The burst of a light wave can refer to a burst of an electric field 250 generated by the light wave. The burst of the electric field 250 of the light wave can be referred to as a light pulse or a flash of light. For example, a light source that is intermittently turned on and off can create bursts, flashes or pulses of light.

[00155] FIG. 2C illustrates pulses of light 235a-c in accordance with an embodiment. The light pulses 235a-c can be illustrated via a graph in the frequency spectrum where the y-axis represents the frequency of the light wave (e.g., the speed of the light wave divided by the wavelength) and the x-axis represents time. The visual signal can include modulations of the light wave between a frequency of Fa and a frequency different from Fa. For example, the NSS 105 can modulate a light wave between a frequency in the visible spectrum, such as Fa, and a frequency outside the visible spectrum. The NSS 105 can modulate the light wave between two or more frequencies, between an on state and an off state, or between a high power state and a low power state.

[00156] In some cases, the frequency of the light wave used to generate the light pulse can be constant at Fa, thereby generating a square wave in the frequency spectrum. In some embodiments, each of the three pulses 235a-c can include light waves having a same frequency Fa.

[00157] The width of each of the light pulses (e.g., the duration of the burst of the light wave) can correspond to a pulse width 230a. The pulse width 230a can refer to the length or duration of the burst. The pulse width 230a can be measured in units of time or distance. In some embodiments, the pulses 235a-c can include light waves having different frequencies from one another. In some embodiments, the pulses 235a-c can have different pulse widths 230a from one another, as illustrated in FIG. 2D. For example, a first pulse 235d of FIG. 2D can have a pulse width 230a, while a second pulse 235e has a second pulse width 230b that is greater than the first pulse width 230a. A third pulse 235f can have a third pulse width 230c that is less than the second pulse width 230b. The third pulse width 230c can also be less than the first pulse width 230a. While the pulse widths 230a-c of the pulses 235d-f of the pulse train may vary, the light generation module 110 can maintain a constant pulse rate interval 240 for the pulse train.

[00158] The pulses 235a-c can form a pulse train having a pulse rate interval 240. The pulse rate interval 240 can be quantified using units of time. The pulse rate interval 240 can be based on a frequency of the pulses of the pulse train 201. The frequency of the pulses of the pulse train 201 can be referred to as a modulation frequency. For example, the light generation module 110 can provide a pulse train 201 with a predetermined frequency corresponding to gamma activity, such as 40 Hz. To do so, the light generation module 110 can determine the pulse rate interval 240 by taking the multiplicative inverse (or reciprocal) of the frequency (e.g., 1 divided by the predetermined frequency for the pulse train). For example, the light generation module 110 can take the multiplicative inverse of 40 Hz by dividing 1 by 40 Hz to determine the pulse rate interval 240 as 0.025 seconds. The pulse rate interval 240 can remain constant throughout the pulse train. In some embodiments, the pulse rate interval 240 can vary throughout the pulse train or from one pulse train to a subsequent pulse train. In some embodiments, the number of pulses transmitted during a second can be fixed, while the pulse rate interval 240 varies.
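
The reciprocal relationship described above is trivial to express; the helper below is just that arithmetic and is provided for illustration only.

```python
def pulse_rate_interval(frequency_hz):
    """Pulse rate interval (seconds) as the multiplicative inverse (reciprocal)
    of the pulse train frequency."""
    return 1.0 / frequency_hz

# A 40 Hz gamma-frequency pulse train has a pulse rate interval of 0.025 s.
assert pulse_rate_interval(40.0) == 0.025
```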

[00159] In some embodiments, the light generation module 110 can generate a light pulse having a light wave that varies in frequency. For example, the light generation module 110 can generate up-chirp pulses where the frequency of the light wave of the light pulse increases from the beginning of the pulse to the end of the pulse, as illustrated in FIG. 2E. For example, the frequency of a light wave at the beginning of pulse 235g can be Fa. The frequency of the light wave of the pulse 235g can increase from Fa to Fb in the middle of the pulse 235g, and then to a maximum of Fc at the end of the pulse 235g. Thus, the frequency of the light wave used to generate the pulse 235g can range from Fa to Fc. The frequency can increase linearly, exponentially, or based on some other rate or curve.

[00160] The light generation module 110 can generate down-chirp pulses, as illustrated in FIG. 2F, where the frequency of the light wave of the light pulse decreases from the beginning of the pulse to the end of the pulse. For example, the frequency of a light wave at the beginning of pulse 235j can be Fd. The frequency of the light wave of the pulse 235j can decrease from Fd to Fe in the middle of the pulse 235j, and then to a minimum of Ff at the end of the pulse 235j. Thus, the frequency of the light wave used to generate the pulse 235j can range from Fd to Ff. The frequency can decrease linearly, exponentially, or based on some other rate or curve.
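
Up- and down-chirp pulses of the kind described above can be sketched with SciPy's chirp helper. The specific frequencies, durations, and sampling rate are placeholders, illustrated at audio-rate frequencies as a stand-in for the optical sweeps Fa-Fc and Fd-Ff.

```python
import numpy as np
from scipy.signal import chirp

def chirp_pulse(duration_s, f_start, f_end, fs=44_100, method="linear"):
    """Single pulse whose instantaneous frequency sweeps from f_start to f_end over
    the pulse: f_start < f_end gives an up-chirp, f_start > f_end a down-chirp.
    The sweep can be linear, or exponential via method="logarithmic"."""
    t = np.linspace(0.0, duration_s, int(duration_s * fs), endpoint=False)
    return chirp(t, f0=f_start, t1=duration_s, f1=f_end, method=method)

# Audio-rate stand-ins for the optical sweeps Fa -> Fc (up) and Fd -> Ff (down).
up_chirp = chirp_pulse(0.010, f_start=1_000.0, f_end=2_000.0)
down_chirp = chirp_pulse(0.010, f_start=2_000.0, f_end=1_000.0)
```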

[00161] Visual signaling component 150 can be designed and constructed to generate the light pulses responsive to instructions from the light generation module 110. The instructions can include, for example, parameters of the light pulse such as a frequency or wavelength of the light wave, intensity, duration of the pulse, frequency of the pulse train, pulse rate interval, or duration of the pulse train ( e.g ., a number of pulses in the pulse train or the length of time to transmit a pulse train having a predetermined frequency). The light pulse can be perceived, observed, or otherwise identified by the brain via ocular means such as eyes. The light pulses can be transmitted to the eye via direct visual field or peripheral visual field. [00162] FIG. 3A illustrates a horizontal direct visual field 310 and a horizontal peripheral visual field. FIG. 3B illustrates a vertical direct visual field 320 and a vertical peripheral visual field 325. FIG. 3C illustrates degrees of direct visual fields and peripheral visual fields, including relative distances at which visual signals might be perceived in the different visual fields. The visual signaling component 150 can include a light source 305. The light source 305 can be positioned to transmit light pulses into the direct visual field 310 or 320 of a person’s eyes. The NSS 105 can be configured to transmit light pulses into the direct visual field 310 or 320 because this may facilitate brain entrainment as the person may pay more attention to the light pulses. The level of attention can be quantitatively measured directly in the brain, indirectly through the person’s eye behavior, or by active feedback (e.g., mouse tracking).

[00163] The light source 305 can be positioned to transmit light pulses into a peripheral visual field 315 or 325 of a person’s eyes. For example, the NSS 105 can transmit light pulses into the peripheral visual field 315 or 325 as these light pulses may be less distracting to the person who might be performing other tasks, such as reading, walking, driving, etc. Thus, the NSS 105 can provide subtle, on-going visual brain stimulation by transmitting light pulses via the peripheral visual field.

[00164] In some embodiments, the light source 305 can be head-worn, while in other embodiments the light source 305 can be held in a subject's hands, placed on a stand, hung from a ceiling, connected to a chair, or otherwise positioned to direct light towards the direct or peripheral visual fields. For example, a chair or externally supported system can include or position the light source 305 to provide the visual input while maintaining a fixed/pre-specified relationship between the subject's visual field and the visual stimulus. The system can provide an immersive experience. For example, the system can include an opaque or partially opaque dome that includes the light source. The dome can be positioned over the subject's head while the subject sits or reclines in a chair. The dome can cover portions of the subject's visual field, thereby reducing external distractions and facilitating entrainment of regions of the brain.

[00165] The light source 305 can include any type of light source or light emitting device. The light source can include a coherent light source, such as a laser. The light source 305 can include a light emitting diode (LED), Organic LED, fluorescent light source, incandescent light, or any other light emitting device. The light source can include a lamp, light bulb, or one or more light emitting diodes of various colors (e.g., white, red, green, blue). In some embodiments, the light source includes a semiconductor light emitting device, such as a light emitting diode of any spectral or wavelength range. In some embodiments, the light source 305 includes a broadband lamp or a broadband light source. In some embodiments, the light source includes a black light. In some embodiments, light source 305 includes a hollow cathode lamp, a fluorescent tube light source, a neon lamp, an argon lamp, a plasma lamp, a xenon flash lamp, a mercury lamp, a metal halide lamp, or a sulfur lamp. In some embodiments, the light source 305 includes a laser or a laser diode. In some embodiments, light source 305 includes an OLED, PHOLED, QDLED, or any other variation of a light source utilizing an organic material. In some embodiments, light source 305 includes a monochromatic light source. In some embodiments, light source 305 includes a polychromatic light source. In some embodiments, the light source 305 includes a light source emitting light partially in the spectral range of ultraviolet light. In some embodiments, light source 305 includes a device, product or a material emitting light partially in the spectral range of visible light. In some embodiments, light source 305 is a device, product or a material partially emanating or emitting light in the spectral range of infrared light. In some embodiments, light source 305 includes a device, product or a material emanating or emitting light in the visible spectral range. In some embodiments, light source 305 includes a light guide, an optical fiber or a waveguide through which light is emitted from the light source. [00166] In some embodiments, light source 305 includes one or more mirrors for reflecting or redirecting light. For example, the mirrors can reflect or redirect light towards the direct visual field 310 or 320, or the peripheral visual field 315 or 325. The light source 305 can include or interact with microelectromechanical systems ("MEMS") devices. The light source 305 can include or interact with a digital light projector ("DLP"). In some embodiments, the light source 305 can include ambient light or sunlight. The ambient light or sunlight can be focused by one or more optical lenses and directed towards the direct visual field or peripheral visual field. The ambient light or sunlight can be directed by one or more mirrors towards the direct visual field or peripheral visual field.

[00167] In cases where the light source is ambient light, the ambient light is not positioned, but the ambient light can enter the eye via a direct visual field or peripheral visual field. In some embodiments, the light source 305 can be positioned to direct light pulses towards the direct visual field or peripheral visual field. For example, one or more light sources 305 can be attached, affixed, coupled, mechanically coupled, or otherwise provided with a frame 400 as illustrated in FIG. 4A. In some embodiments, the visual signaling component 150 can include the frame 400. Additional details of the operation of the NSS 105 in conjunction with the frame 400 including one or more light sources 305 are provided below, in the section labeled as "NSS Operating With A Frame". Thus, the light source can include any type of light source, such as an optical light source, mechanical light source, or chemical light source. The light source can include any material or object that is reflective or opaque that can generate, emit, or reflect oscillating patterns of light, such as a fan rotating in front of a light, or bubbles. In some embodiments, the light source can include optical illusions that are invisible, physiological phenomena within the eye (e.g., pressing the eyeball), or chemicals applied to the eye.

Systems and Devices Configured for Neural Stimulation via Visual Stimulation

[00168] Referring now to FIG. 4A, the frame 400 can be designed and constructed to be placed or positioned on a person's head. The frame 400 can be configured to be worn by the person. The frame 400 can be designed and constructed to stay in place. The frame 400 can be configured to be worn and stay in place as a person sits, stands, walks, runs, or lies down flat. The light source 305 can be configured on the frame 400 to project light pulses towards the person's eyes during these various positions. In some embodiments, the light source 305 can be configured to project light pulses towards the person's eyes if their eyelids are closed such that the light pulse penetrates the eyelid to be perceived by the retina. The frame 400 can include a bridge 420. The frame 400 can include one or more eye wires 415 coupled to the bridge 420. The bridge 420 can be positioned in between the eye wires 415. The frame 400 can include one or more temples extending from the one or more eye wires 415. In some embodiments, the eye wires 415 can include or hold a lens 425. In some embodiments, the eye wires 415 can include or hold a solid material 425 or cover 425. The lens, solid material, or cover 425 can be transparent, semi-transparent, opaque, or completely block out external light.

[00169] One or more light sources 305 can be positioned on or adjacent to the eye wire 415, lens or other solid material 425, or bridge 420. For example, a light source 305 can be positioned in the middle of the eye wire 415 on a solid material 425 in order to transmit light pulses into the direct visual field. In some embodiments, a light source 305 can be positioned at a corner of the eye wire 415, such as a corner of the eye wire 415 coupled to the temple 410, in order to transmit light pulses towards a peripheral visual field.

[00170] The NSS 105 can perform visual brain entrainment via a single eye or both eyes. For example, the NSS 105 can direct light pulses to a single eye or both eyes. The NSS 105 can interface with a visual signaling component 150 that includes a frame 400 and two eye wires 415. However, the visual signaling component 150 may include a single light source 305 configured and positioned to direct light pulses to a first eye. The visual signaling component 150 can further include a light blocking component that keeps out or blocks the light pulses generated from the light source 305 from entering a second eye. The visual signaling component 150 can block or prevent light from entering the second eye during the brain entrainment process.

[00171] In some embodiments, the visual signaling component 150 can alternatively transmit or direct light pulses to the first eye and the second eye. For example, the visual signaling component 150 can direct light pulses to the first eye for a first time interval. The visual signaling component 150 can direct light pulses to the second eye for a second time interval. The first time interval and the second time interval can be a same time interval, overlapping time intervals, mutually exclusive time intervals, or subsequent time intervals. [00172] FIG. 4B illustrates a frame 400 comprising a set of shutters 435 that can block at least a portion of light that enters through the eye wire 415. The set of shutters 435 can intermittently block ambient light or sunlight that enters through the eye wire 415. The set of shutters 435 can open to allow light to enter through the eye wire 415, and close to at least partially block light that enters through the eye wire 415. Additional details of the operation of the NSS 105 in conjunction with the frame 400 including one or more shutters 430 are provided below, in the section labelled as “NSS Operating With A Frame”.

[00173] The set of shutters 435 can include one or more shutters 430 that are opened and closed by one or more actuators. The shutter 430 can be formed from one or more materials. The shutter 430 can include one or more materials. The shutter 430 can include or be formed from materials that are capable of at least partially blocking or attenuating light. [00174] The frame 400 can include one or more actuators configured to at least partially open or close the set of shutters 435 or an individual shutter 430. The frame 400 can include one or more types of actuators to open and close the shutters 435. For example, the actuator can include a mechanically driven actuator. The actuator can include a magnetically driven actuator. The actuator can include a pneumatic actuator. The actuator can include a hydraulic actuator. The actuator can include a piezoelectric actuator. The actuator can include a microelectromechanical systems ("MEMS") actuator.

[00175] The set of shutters 435 can include one or more shutter 430 that is opened and closed via electrical or chemical techniques. For example, the shutter 430 or set of shutters 435 can be formed from one or more chemicals. The shutter 430 or set of shutters can include one or more chemicals. The shutter 430 or set of shutters 435 can include or be formed from chemicals that are capable of at least partially blocking or attenuating light.

[00176] For example, the shutter 430 or set of shutters 435 can include photochromic lenses configured to filter, attenuate or block light. The photochromic lenses can automatically darken when exposed to sunlight. The photochromic lens can include molecules that are configured to darken the lens. The molecules can be activated by light waves, such as ultraviolet radiation or other light wavelengths. Thus, the photochromic molecules can be configured to darken the lens in response to a predetermined wavelength of light.

[00177] The shutter 430 or set of shutters 435 can include electrochromic glass or plastic. Electrochromic glass or plastic can change from light to dark (e.g., clear to opaque) in response to an electrical voltage or current. Electrochromic glass or plastic can include multiple layers of metal-oxide coatings deposited on the glass or plastic, with lithium ions that travel between two electrodes across a layer to lighten or darken the glass.

[00178] The shutter 430 or set of shutters 435 can include micro shutters. Micro shutters can include tiny windows that measure 100 by 200 microns. The micro shutters can be arrayed in the eye wire 415 in a waffle-like grid. The individual micro shutters can be opened or closed by an actuator. The actuator can include a magnetic arm that sweeps past the micro shutter to open or close the micro shutter. An open micro shutter can allow light to enter through the eye wire 415, while a closed micro shutter can block, attenuate, or filter the light.

[00179] The NSS 105 can drive the actuator to open and close one or more shutters 430 or the set of shutters 435 at a predetermined frequency such as 40 Hz. By opening and closing the shutter 430 at the predetermined frequency, the shutter 430 can allow flashes of light to pass through the eye wire 415 at the predetermined frequency. Thus, the frame 400 including a set of shutters 435 may not include or use a separate light source coupled to the frame 400, such as a light source 305 coupled to frame 400 depicted in FIG. 4A.
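
By way of a non-limiting illustration only, the shutter timing described above could be sketched in software roughly as follows. The ShutterActuator class and its open() and close() methods are hypothetical placeholders for whatever driver interface the actuator hardware exposes; the sketch simply assumes the actuator can settle within each half-cycle at 40 Hz.

```python
import time

class ShutterActuator:
    """Hypothetical driver interface for a shutter actuator (e.g., MEMS or piezoelectric)."""
    def open(self):
        pass   # energize the actuator to open the shutter 430

    def close(self):
        pass   # de-energize the actuator to close the shutter 430

def drive_shutter(actuator, frequency_hz=40.0, duty_cycle=0.5, duration_s=10.0):
    """Open and close the shutter at frequency_hz so light passes through the
    eye wire 415 as flashes at the predetermined frequency."""
    period = 1.0 / frequency_hz          # 25 ms per cycle at 40 Hz
    open_time = period * duty_cycle      # time the shutter stays open each cycle
    close_time = period - open_time
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        actuator.open()
        time.sleep(open_time)
        actuator.close()
        time.sleep(close_time)

drive_shutter(ShutterActuator())
```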

[00180] In some embodiments, the visual signaling component 150 or light source 305 can refer to or be included in a virtual reality headset 401, as depicted in FIG. 4C. For example, the virtual reality headset 401 can be designed and constructed to receive a light source 305. The light source 305 can include a computing device having a display device, such as a smartphone or mobile telecommunications device. The virtual reality headset 401 can include a cover 440 that opens to receive the light source 305. The cover 440 can close to lock or hold the light source 305 in place. When closed, the cover 440 and case 450 and 445 can form an enclosure for the light source 305. This enclosure can provide an immersive experience that minimizes or eliminates unwanted visual distractions. The virtual reality headset can provide an environment to maximize brainwave entrainment. The virtual reality headset can provide an augmented reality experience. In some embodiments, the light source 305 can form an image on another surface such that the image is reflected off the surface and towards a subject's eye (e.g., a heads-up display that overlays a flickering object or an augmented portion of reality on the screen). Additional details of the operation of the NSS 105 in conjunction with the virtual reality headset 401 are provided below, in the section labeled as "Systems And Devices Configured For Neural Stimulation Via Visual Stimulation".

[00181] The virtual reality headset 401 includes straps 455 and 460 configured to secure the virtual reality headset 401 to a person's head. The virtual reality headset 401 can be secured via straps 455 and 460 so as to minimize movement of the headset 401 when worn during physical activity, such as walking or running. The virtual reality headset 401 can include a skull cap formed from strap 455 or 460.

[00182] The feedback sensor 605 can include an electrode, dry electrode, gel electrode, saline-soaked electrode, or adhesive-based electrode.

[00183] FIGs. 5A-5D illustrate embodiments of the visual signaling component 150 that can include a tablet computing device 500 or other computing device 500 having a display screen 305 as the light source 305. The visual signaling component 150 can transmit light pulses, light flashes, or patterns of light via the display screen 305 or light source 305.

[00184] FIG. 5A illustrates a display screen 305 or light source 305 that transmits light. The light source 305 can transmit light comprising a wavelength in the visible spectrum. The NSS 105 can instruct the visual signaling component 150 to transmit light via the light source 305. The NSS 105 can instruct the visual signaling component 150 to transmit flashes of light or light pulses having a predetermined pulse rate interval. For example, FIG. 5B illustrates the light source 305 turned off or disabled such that the light source does not emit light, or emits a minimal or reduced amount of light. The visual signaling component 150 can cause the tablet computing device 500 to enable (e.g., FIG. 5A) and disable (e.g., FIG. 5B) the light source 305 such that the flashes of light have a predetermined frequency, such as 40 Hz. The visual signaling component 150 can toggle or switch the light source 305 between two or more states to generate flashes of light or light pulses with the predetermined frequency.
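
As a non-limiting sketch of the toggling described above, the following alternates a display between an emitting state and a dark state at a predetermined frequency. The set_screen function is a hypothetical placeholder for the display API of the tablet computing device 500, not a required element of the system.

```python
import time

def set_screen(on: bool):
    """Hypothetical placeholder for the tablet's display API: full brightness
    when on, dark (minimal emission) when off."""
    pass

def flash_light_source(frequency_hz=40.0, duration_s=60.0):
    """Toggle the light source 305 between two states so that flashes of light
    are emitted at the predetermined frequency (e.g., 40 Hz)."""
    half_period = 1.0 / frequency_hz / 2.0   # 12.5 ms on, 12.5 ms off at 40 Hz
    end = time.monotonic() + duration_s
    state = False
    while time.monotonic() < end:
        state = not state
        set_screen(state)
        time.sleep(half_period)

flash_light_source(duration_s=5.0)
```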

[00185] In some embodiments, the light generation module 110 can instruct or cause the visual signaling component 150 to display a pattern of light via the display device 305 or light source 305, as depicted in FIGs. 5C and 5D. The light generation module 110 can cause the visual signaling component 150 to flicker, toggle, or switch between two or more patterns to generate flashes of light or light pulses. Patterns can include, for example, alternating checkerboard patterns 510 and 515. The pattern can include symbols, characters, or images that can be toggled or adjusted from one state to another state. For example, the color of a character or text relative to a background color can be inverted to cause a switch between a first state 510 and a second state 515. Inverting a foreground color and background color at a predetermined frequency can generate light pulses that facilitate adjusting or managing a frequency of neural oscillations. Additional details of the operation of the NSS 105 in conjunction with the tablet 500 are provided below, in the section labeled as "NSS Operating With a Tablet".
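
The pattern inversion described above could be illustrated, purely as a sketch, by representing the checkerboard as a two-dimensional array of foreground and background cells and inverting it each half-cycle; rendering to the display device 305 is left to a hypothetical display routine and is omitted here.

```python
def checkerboard(rows=8, cols=8, first_state=0):
    """Build a checkerboard pattern 510/515 as a grid of 0 (background) and 1 (foreground)."""
    return [[(r + c + first_state) % 2 for c in range(cols)] for r in range(rows)]

def invert(pattern):
    """Swap foreground and background, switching the pattern from state 510 to state 515."""
    return [[1 - cell for cell in row] for row in pattern]

pattern = checkerboard()
for _ in range(10):              # each iteration is one half-cycle of the flicker
    pattern = invert(pattern)    # a real implementation would render the inverted pattern
                                 # to the display device 305 every 12.5 ms for 40 Hz
```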

[00186] In some embodiments, the light generation module 110 can instruct or cause the visual signaling component 150 to flicker, toggle, or switch between images configured to stimulate specific or predetermined portions of the brain or a specific cortex. The presentation, form, color, motion, and other aspects of the light or image-based stimuli can dictate which cortex or cortices are recruited to process the stimuli. The visual signaling component 150 can stimulate discrete portions of the cortex by modulating the presentation of the stimuli to target specific or general regions of interest. The relative position in the field of view, the color of the input, or the motion and speed of the light stimuli can dictate which region of the cortex is stimulated.

[00187] For example, the brain can include at least two portions that process predetermined types of visual stimuli: the primary visual cortex on the left side of the brain, and the calcarine fissure on the right side of the brain. Each of these two portions can have one or more sub-portions that process predetermined types of visual stimuli. For example, the calcarine fissure can include a sub-portion referred to as area V5 that can include neurons that respond strongly to motion but may not register stationary objects. Subjects with damage to area V5 may have motion blindness, but otherwise normal vision. In another example, the primary visual cortex can include a sub-portion referred to as area V4 that can include neurons that are specialized for color perception. Subjects with damage to area V4 may have color blindness and only perceive objects in shades of gray. In another example, the primary visual cortex can include a sub-portion referred to as area V1 that includes neurons that respond strongly to contrast edges and help segment the image into separate objects.

[00188] Thus, the light generation module 110 can instruct or cause the visual signaling component 150 to form a type of still image or video, generate a flicker, or toggle between images that are configured to stimulate specific or predetermined portions of the brain or a specific cortex. For example, the light generation module 110 can instruct or cause the visual signaling component 150 to generate images of human faces to stimulate a fusiform face area, which can facilitate brain entrainment for subjects having prosopagnosia or face blindness. The light generation module 110 can instruct or cause the visual signaling component 150 to generate images of faces flickering to target this area of the subject's brain. In another example, the light generation module 110 can instruct the visual signaling component 150 to generate images that include edges or line drawings to stimulate neurons of the primary visual cortex that respond strongly to contrast edges.

[00189] The NSS 105 can include, access, interface with, or otherwise communicate with at least one light adjustment module 115. The light adjustment module 115 can be designed and constructed to measure or verify an environmental variable (e.g., light intensity, timing, incident light, ambient light, eye lid status, etc.) to adjust a parameter associated with the visual signal, such as a frequency, amplitude, wavelength, intensity, pattern, or other parameter of the visual signal. The light adjustment module 115 can automatically vary a parameter of the visual signal based on profile information or feedback. The light adjustment module 115 can receive the feedback information from the feedback monitor 135. The light adjustment module 115 can receive instructions or information from a side effects management module 130. The light adjustment module 115 can receive profile information from the profile manager 125.

[00190] The NSS 105 can include, access, interface with, or otherwise communicate with at least one unwanted frequency filtering module 120. The unwanted frequency filtering module 120 can be designed and constructed to block, mitigate, reduce, or otherwise filter out undesired frequencies of visual signals, to prevent or reduce the amount of such visual signals perceived by the brain. The unwanted frequency filtering module 120 can interface, instruct, control, or otherwise communicate with a filtering component 155 to cause the filtering component 155 to block, attenuate, or otherwise reduce the effect of the unwanted frequency on the neural oscillations.

[00191] The NSS 105 can include, access, interface with, or otherwise communicate with at least one profile manager 125. The profile manager 125 can be designed or constructed to store, update, retrieve, or otherwise manage information associated with one or more subjects associated with the visual brain entrainment. Profile information can include, for example, historical treatment information, historical brain entrainment information, dosing information, parameters of light waves, feedback, physiological information, environmental information, or other data associated with the systems and methods of brain entrainment.

[00192] The NSS 105 can include, access, interface with, or otherwise communicate with at least one side effects management module 130. The side effects management module 130 can be designed and constructed to provide information to the light adjustment module 115 or the light generation module 110 to change one or more parameters of the visual signal in order to reduce a side effect. Side effects can include, for example, nausea, migraines, fatigue, seizures, eye strain, or loss of sight.

[00193] The side effects management module 130 can automatically instruct a component of the NSS 105 to alter or change a parameter of the visual signal. The side effects management module 130 can be configured with predetermined thresholds to reduce side effects. For example, the side effects management module 130 can be configured with a maximum duration of a pulse train, maximum intensity of light waves, maximum amplitude, maximum duty cycle of a pulse train (e.g., the pulse width multiplied by the frequency of the pulse train), or maximum number of treatments for brainwave entrainment in a time period (e.g., 1 hour, 2 hours, 12 hours, or 24 hours).
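
For illustration only, the thresholds described above could be represented as follows. The numeric limits shown are arbitrary placeholders rather than recommended values, and the duty cycle is computed as the pulse width multiplied by the pulse-train frequency, as noted above.

```python
# Illustrative, non-limiting representation of side-effect limits; the numbers
# below are placeholders, not recommended clinical values.
LIMITS = {
    "max_pulse_train_duration_s": 3600.0,
    "max_intensity_lux": 500.0,
    "max_duty_cycle": 0.5,
    "max_sessions_per_24h": 2,
}

def duty_cycle(pulse_width_s, frequency_hz):
    """Duty cycle of a pulse train: pulse width multiplied by pulse-train frequency."""
    return pulse_width_s * frequency_hz

def within_limits(pulse_width_s, frequency_hz, duration_s, intensity_lux, sessions_24h):
    """Return True only if every configured maximum is respected."""
    return (duration_s <= LIMITS["max_pulse_train_duration_s"]
            and intensity_lux <= LIMITS["max_intensity_lux"]
            and duty_cycle(pulse_width_s, frequency_hz) <= LIMITS["max_duty_cycle"]
            and sessions_24h <= LIMITS["max_sessions_per_24h"])

# Example: a 12.5 ms pulse at 40 Hz has a duty cycle of 0.0125 * 40 = 0.5.
print(within_limits(0.0125, 40.0, 1200.0, 300.0, 1))  # True under the placeholder limits
```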

[00194] The side effects management module 130 can cause a change in the parameter of the visual signal in response to feedback information. The side effects management module 130 can receive feedback from the feedback monitor 135. The side effects management module 130 can determine to adjust a parameter of the visual signal based on the feedback. The side effects management module 130 can compare the feedback with a threshold to determine to adjust the parameter of the visual signal.

[00195] The side effects management module 130 can be configured with or include a policy engine that applies a policy or a rule to the current visual signal and feedback to determine an adjustment to the visual signal. For example, if feedback indicates that a patient receiving visual signals has a heart rate or pulse rate above a threshold, the side effects management module 130 can turn off the pulse train until the pulse rate stabilizes to a value below the threshold, or below a second threshold that is lower than the threshold.
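
One possible sketch of such a policy rule follows; the threshold values are placeholders, and the function merely returns whether the pulse train should be on given the most recent heart rate reading.

```python
def heart_rate_policy(heart_rate_bpm, pulse_train_on, upper=100, resume_below=90):
    """Illustrative policy rule: pause the pulse train while the measured heart
    rate exceeds an upper threshold, and resume only once it falls below a
    second, lower threshold. Numeric thresholds are placeholders."""
    if pulse_train_on and heart_rate_bpm > upper:
        return False          # turn the pulse train off
    if not pulse_train_on and heart_rate_bpm < resume_below:
        return True           # pulse rate has stabilized; resume stimulation
    return pulse_train_on     # otherwise keep the current state

print(heart_rate_policy(112, True))    # False: pause while heart rate is elevated
print(heart_rate_policy(88, False))    # True: resume once below the lower threshold
```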

[00196] The NSS 105 can include, access, interface with, or otherwise communicate with at least one feedback monitor 135. The feedback monitor can be designed and constructed to receive feedback information from a feedback component 160. Feedback component 160 can include, for example, a feedback sensor 605 such as a temperature sensor, heart or pulse rate monitor, physiological sensor, ambient light sensor, ambient temperature sensor, sleep status via actigraphy, blood pressure monitor, respiratory rate monitor, brain wave sensor, EEG probe, electrooculography (“EOG”) probes configured to measure the corneo-retinal standing potential that exists between the front and the back of the human eye, accelerometer, gyroscope, motion detector, proximity sensor, camera, microphone, or photo detector.

[00197] In some embodiments, a computing device 500 can include the feedback component 160 or feedback sensor 605, as depicted in FIGS. 5C and 5D. For example, the feedback sensor on tablet 500 can include a front-facing camera that can capture images of a person viewing the light source 305.

[00198] FIG. 6A depicts one or more feedback sensors 605 provided on a frame 400. In some embodiments, a frame 400 can include one or more feedback sensors 605 provided on a portion of the frame, such as the bridge 420 or a portion of the eye wire 415. The feedback sensor 605 can be provided with or coupled to the light source 305. The feedback sensor 605 can be separate from the light source 305.

[00199] The feedback sensor 605 can interact with or communicate with the NSS 105. For example, the feedback sensor 605 can provide detected feedback information or data to the NSS 105 (e.g., feedback monitor 135). The feedback sensor 605 can provide data to the NSS 105 in real-time, for example as the feedback sensor 605 detects or senses the information. The feedback sensor 605 can provide the feedback information to the NSS 105 based on a time interval, such as 1 minute, 2 minutes, 5 minutes, 10 minutes, hourly, 2 hours, 4 hours, 12 hours, or 24 hours. The feedback sensor 605 can provide the feedback information to the NSS 105 responsive to a condition or event, such as a feedback measurement exceeding a threshold or falling below a threshold. The feedback sensor 605 can provide feedback information responsive to a change in a feedback parameter. In some embodiments, the NSS 105 can ping, query, or send a request to the feedback sensor 605 for information, and the feedback sensor 605 can provide the feedback information in response to the ping, request, or query.

[00200] FIG. 6B illustrates feedback sensors 605 placed or positioned at, on, or near a person's head. Feedback sensors 605 can include, for example, EEG probes that detect brain wave activity.
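
A simplified sketch of the reporting behaviors described above (interval-based and threshold-based) follows. The read_sensor and report functions are hypothetical stand-ins for the feedback sensor 605 and its link to the feedback monitor 135; query-driven reporting is omitted for brevity.

```python
import time

def read_sensor():
    """Hypothetical read of a feedback sensor 605 (e.g., heart rate in bpm)."""
    return 72.0

def report(value):
    """Hypothetical delivery of feedback information to the feedback monitor 135."""
    print("feedback:", value)

def feedback_loop(interval_s=60.0, threshold=100.0, duration_s=300.0):
    """Send feedback on a fixed interval, and immediately when a reading
    crosses the threshold; a real sensor could also answer queries from the NSS 105."""
    next_report = time.monotonic()
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        value = read_sensor()
        now = time.monotonic()
        if value > threshold or now >= next_report:
            report(value)
            next_report = now + interval_s
        time.sleep(1.0)

feedback_loop(interval_s=2.0, threshold=100.0, duration_s=5.0)
```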

[00201] The feedback monitor 135 can detect, receive, obtain, or otherwise identify feedback information from the one or more feedback sensors 605. The feedback monitor 135 can provide the feedback information to one or more components of the NSS 105 for further processing or storage. For example, the profile manager 125 can update the profile data structure 145 stored in data repository 140 with the feedback information. Profile manager 125 can associate the feedback information with an identifier of the patient or person undergoing the visual brain stimulation, as well as a time stamp and date stamp corresponding to receipt or detection of the feedback information.

[00202] The feedback monitor 135 can determine a level of attention. The level of attention can refer to the focus provided to the light pulses used for brain stimulation. The feedback monitor 135 can determine the level of attention using various hardware and software techniques. The feedback monitor 135 can assign a score to the level of attention (e.g., 1 to 10 with 1 being low attention and 10 being high attention, or vice versa; 1 to 100 with 1 being low attention and 100 being high attention, or vice versa; 0 to 1 with 0 being low attention and 1 being high attention, or vice versa), categorize the level of attention (e.g., low, medium, high), grade the attention (e.g., A, B, C, D, or F), or otherwise provide an indication of a level of attention.

[00203] In some cases, the feedback monitor 135 can track a person's eye movement to identify a level of attention. The feedback monitor 135 can interface with a feedback component 160 that includes an eye-tracker. The feedback monitor 135 (e.g., via feedback component 160) can detect and record eye movement of the person and analyze the recorded eye movement to determine an attention span or level of attention. The feedback monitor 135 can measure eye gaze, which can indicate or provide information related to covert attention. For example, the feedback monitor 135 (e.g., via feedback component 160) can be configured with electro-oculography ("EOG") to measure the skin electric potential around the eye, which can indicate a direction the eye faces relative to the head. In some embodiments, the EOG can include a system or device to stabilize the head so it cannot move in order to determine the direction of the eye relative to the head. In some embodiments, the EOG can include or interface with a head tracker system to determine the position of the head, and then determine the direction of the eye relative to the head.

[00204] In some embodiments, the feedback monitor 135 and feedback component 160 can determine or track the direction of the eye or eye movement using video detection of the pupil or corneal reflection. For example, the feedback component 160 can include one or more cameras or video cameras. The feedback component 160 can include an infra-red source that sends light pulses towards the eyes. The light can be reflected by the eye. The feedback component 160 can detect the position of the reflection. The feedback component 160 can capture or record the position of the reflection. The feedback component 160 can perform image processing on the reflection to determine or compute the direction of the eye or gaze direction of the eye.

[00205] The feedback monitor 135 can compare the eye direction or movement to historical eye direction or movement of the same person, nominal eye movement, or other historical eye movement information to determine a level of attention. For example, if the eye is focused on the light pulses during the pulse train, then the feedback monitor 135 can determine that the level of attention is high. If the feedback monitor 135 determines that the eye moved away from the pulse train for 25% of the pulse train, then the feedback monitor 135 can determine that the level of attention is medium. If the feedback monitor 135 determines that the eye movement occurred for more than 50% of the pulse train, or that the eye was not focused on the pulse train for more than 50% of the pulse train, then the feedback monitor 135 can determine that the level of attention is low.
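
The three-way classification described above could be sketched as follows, where the input is the fraction of the pulse train during which the eye was not focused on the light pulses; the exact cut-offs mirror the example percentages given above and are otherwise placeholders.

```python
def attention_level(off_target_fraction):
    """Classify attention from the fraction of the pulse train during which the
    eye was not focused on the light pulses (0.0 = always focused)."""
    if off_target_fraction > 0.50:
        return "low"       # eye away from the pulse train for more than 50%
    if off_target_fraction >= 0.25:
        return "medium"    # eye away for roughly a quarter of the pulse train
    return "high"          # eye focused on the light pulses

print(attention_level(0.10))  # "high"
print(attention_level(0.25))  # "medium"
print(attention_level(0.60))  # "low"
```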

[00206] In some embodiments, the system 100 can include a filter (e.g., filtering component 155) to control the spectral range of the light emitted from the light source. In some embodiments, the light source includes a light reactive material affecting the light emitted, such as a polarizer, filter, prism, photochromic material, or electrochromic glass or plastic. The filtering component 155 can receive instructions from the unwanted frequency filtering module 120 to block or attenuate one or more frequencies of light.

[00207] The filtering component 155 can include an optical filter that can selectively transmit light in a particular range of wavelengths or colors, while blocking one or more other ranges of wavelengths or colors. The optical filter can modify the magnitude or phase of the incoming light wave for a range of wavelengths. The optical filter can include an absorptive filter, or an interference or dichroic filter. An absorptive filter can absorb the energy of a photon, transforming the electromagnetic energy of a light wave into internal energy of the absorber (e.g., thermal energy). The reduction in intensity of a light wave propagating through a medium by absorption of a part of its photons can be referred to as attenuation.

[00208] An interference filter or dichroic filter can include an optical filter that reflects one or more spectral bands of light, while transmitting other spectral bands of light. An interference filter or dichroic filter may have a nearly zero coefficient of absorption for one or more wavelengths. Interference filters can be high-pass, low-pass, bandpass, or band-rejection. An interference filter can include one or more thin layers of a dielectric material or metallic material having different refractive indices.

[00209] In an illustrative implementation, the NSS 105 can interface with a visual signaling component 150, a filtering component 155, and a feedback component 160. The visual signaling component 150 can include hardware or devices, such as glass frames 400 and one or more light sources 305. The feedback component 160 can include hardware or devices, such as a feedback sensor 605. The filtering component 155 can include hardware, materials, or chemicals, such as a polarizing lens, shutters, electrochromic materials, or photochromic materials.

Computing Environment

[00210] FIGs. 7A and 7B depict block diagrams of a computing device 700. As shown in FIGs. 7A and 7B, each computing device 700 includes a central processing unit 721 and a main memory unit 722. As shown in FIG. 7A, a computing device 700 can include a storage device 728, an installation device 716, a network interface 718, an I/O controller 723, display devices 724a-724n, a keyboard 726, and a pointing device 727, e.g., a mouse. The storage device 728 can include, without limitation, an operating system, software, and software of a neural stimulation system ("NSS") 701. The NSS 701 can include or refer to one or more of NSS 105, NSS 905, or NSOS 1605. As shown in FIG. 7B, each computing device 700 can also include additional optional elements, e.g., a memory port 703, a bridge 770, one or more input/output devices 730a-730n (generally referred to using reference numeral 730), and a cache memory 740 in communication with the central processing unit 721.

[00211] The central processing unit 721 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 722. In many embodiments, the central processing unit 721 is provided by a microprocessor unit, e.g.: those manufactured by Intel Corporation of Mountain View, California; those manufactured by Motorola Corporation of Schaumburg, Illinois; the ARM processor (from, e.g., ARM Holdings and manufactured by ST, TI, ATMEL, etc.) and TEGRA system on a chip (SoC) manufactured by Nvidia of Santa Clara, California; the POWER7 processor and those manufactured by International Business Machines of White Plains, New York; those manufactured by Advanced Micro Devices of Sunnyvale, California; or field programmable gate arrays ("FPGAs") from Altera in San Jose, CA, Intel Corporation, Xilinx in San Jose, CA, or MicroSemi in Aliso Viejo, CA, etc. The computing device 700 can be based on any of these processors, or any other processor capable of operating as described herein. The central processing unit 721 can utilize instruction level parallelism, thread level parallelism, different levels of cache, and multi-core processors. A multi-core processor can include two or more processing units on a single computing component. Examples of multi-core processors include the AMD PHENOM II X2, INTEL CORE i5, and INTEL CORE i7.

[00212] Main memory unit 722 can include one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the microprocessor 721. Main memory unit 722 can be volatile and faster than storage 728 memory. Main memory units 722 can be Dynamic random access memory (DRAM) or any variants, including static random access memory (SRAM), Burst SRAM or SynchBurst SRAM (BSRAM), Fast Page Mode DRAM (FPM DRAM), Enhanced DRAM (EDRAM), Extended Data Output RAM (EDO RAM), Extended Data Output DRAM (EDO DRAM), Burst Extended Data Output DRAM (BEDO DRAM), Single Data Rate Synchronous DRAM (SDR SDRAM), Double Data Rate SDRAM (DDR SDRAM), Direct Rambus DRAM (DRDRAM), or Extreme Data Rate DRAM (XDR DRAM). In some embodiments, the main memory 722 or the storage 728 can be non-volatile; e.g., non-volatile read access memory (NVRAM), flash memory, non-volatile static RAM (nvSRAM), Ferroelectric RAM (FeRAM), Magnetoresistive RAM (MRAM), Phase-change memory (PRAM), conductive-bridging RAM (CBRAM), Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive RAM (RRAM), Racetrack, Nano-RAM (NRAM), or Millipede memory. The main memory 722 can be based on any of the above described memory chips, or any other available memory chips capable of operating as described herein. In the embodiment shown in FIG. 7A, the processor 721 communicates with main memory 722 via a system bus 750 (described in more detail below). FIG. 7B depicts an embodiment of a computing device 700 in which the processor communicates directly with main memory 722 via a memory port 703. For example, in FIG. 7B the main memory 722 can be DRDRAM.

[00213] FIG. 7B depicts an embodiment in which the main processor 721 communicates directly with cache memory 740 via a secondary bus, sometimes referred to as a backside bus. In other embodiments, the main processor 721 communicates with cache memory 740 using the system bus 750. Cache memory 740 typically has a faster response time than main memory 722 and is typically provided by SRAM, BSRAM, or EDRAM. In the embodiment shown in FIG. 7B, the processor 721 communicates with various I/O devices 730 via a local system bus 750. Various buses can be used to connect the central processing unit 721 to any of the I/O devices 730, including a PCI bus, a PCI-X bus, a PCI-Express bus, or a NuBus. For embodiments in which the I/O device is a video display 724, the processor 721 can use an Advanced Graphics Port (AGP) to communicate with the display 724 or the I/O controller 723 for the display 724. FIG. 7B depicts an embodiment of a computer 700 in which the main processor 721 communicates directly with I/O device 730b or other processors 721' via HYPERTRANSPORT, RAPIDIO, or INFINIBAND communications technology. FIG. 7B also depicts an embodiment in which local busses and direct communication are mixed: the processor 721 communicates with I/O device 730a using a local interconnect bus while communicating with I/O device 730b directly.

[00214] A wide variety of EO devices 730a-730n can be present in the computing device 700. Input devices can include keyboards, mice, trackpads, trackballs, touchpads, touch mice, multi-touch touchpads and touch mice, microphones (analog or MEMS), multi-array microphones, drawing tablets, cameras, single-lens reflex camera (SLR), digital SLR (DSLR), CMOS sensors, CCDs, accelerometers, inertial measurement units, infrared optical sensors, pressure sensors, magnetometer sensors, angular rate sensors, depth sensors, proximity sensors, ambient light sensors, gyroscopic sensors, or other sensors. Output devices can include video displays, graphical displays, speakers, headphones, inkjet printers, laser printers, and 3D printers.

[00215] Devices 730a-730n can include a combination of multiple input or output devices, including, e.g., Microsoft KINECT, Nintendo Wiimote for the WII, Nintendo WII U GAMEPAD, or Apple IPHONE. Some devices 730a-730n allow gesture recognition inputs through combining some of the inputs and outputs. Some devices 730a-730n provide for facial recognition, which can be utilized as an input for different purposes including authentication and other commands. Some devices 730a-730n provide for voice recognition and inputs, including, e.g., Microsoft KINECT, SIRI for IPHONE by Apple, Google Now, or Google Voice Search.

[00216] Additional devices 730a-730n have both input and output capabilities, including, e.g., haptic feedback devices, touchscreen displays, or multi-touch displays. Touchscreens, multi-touch displays, touchpads, touch mice, or other touch sensing devices can use different technologies to sense touch, including, e.g., capacitive, surface capacitive, projected capacitive touch (PCT), in-cell capacitive, resistive, infrared, waveguide, dispersive signal touch (DST), in-cell optical, surface acoustic wave (SAW), bending wave touch (BWT), or force-based sensing technologies. Some multi-touch devices can allow two or more contact points with the surface, allowing advanced functionality including, e.g., pinch, spread, rotate, scroll, or other gestures. Some touchscreen devices, including, e.g., Microsoft PIXELSENSE or Multi-Touch Collaboration Wall, can have larger surfaces, such as on a table-top or on a wall, and can also interact with other electronic devices. Some I/O devices 730a-730n, display devices 724a-724n, or groups of devices can be augmented reality devices. The I/O devices can be controlled by an I/O controller 723 as shown in FIG. 7A. The I/O controller 723 can control one or more I/O devices, such as, e.g., a keyboard 726 and a pointing device 727, e.g., a mouse or optical pen. Furthermore, an I/O device can also provide storage and/or an installation medium 716 for the computing device 700. In still other embodiments, the computing device 700 can provide USB connections (not shown) to receive handheld USB storage devices. In further embodiments, an I/O device 730 can be a bridge between the system bus 750 and an external communication bus, e.g., a USB bus, a SCSI bus, a FireWire bus, an Ethernet bus, a Gigabit Ethernet bus, a Fibre Channel bus, or a Thunderbolt bus.

[00217] In some embodiments, display devices 724a-724n can be connected to the I/O controller 723. Display devices can include, e.g., liquid crystal displays (LCD), thin film transistor LCD (TFT-LCD), blue phase LCD, electronic papers (e-ink) displays, flexible displays, light emitting diode displays (LED), digital light processing (DLP) displays, liquid crystal on silicon (LCOS) displays, organic light-emitting diode (OLED) displays, active-matrix organic light-emitting diode (AMOLED) displays, liquid crystal laser displays, time-multiplexed optical shutter (TMOS) displays, or 3D displays. Examples of 3D displays can use, e.g., stereoscopy, polarization filters, active shutters, or autostereoscopy. Display devices 724a-724n can also be a head-mounted display (HMD). In some embodiments, display devices 724a-724n or the corresponding I/O controllers 723 can be controlled through or have hardware support for OPENGL or DIRECTX API or other graphics libraries.

[00218] In some embodiments, the computing device 700 can include or connect to multiple display devices 724a-724n, which each can be of the same or different type and/or form. As such, any of the I/O devices 730a-730n and/or the I/O controller 723 can include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection and use of multiple display devices 724a-724n by the computing device 700. For example, the computing device 700 can include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect or otherwise use the display devices 724a-724n. In one embodiment, a video adapter can include multiple connectors to interface to multiple display devices 724a-724n. In other embodiments, the computing device 700 can include multiple video adapters, with each video adapter connected to one or more of the display devices 724a-724n. In some embodiments, any portion of the operating system of the computing device 700 can be configured for using multiple displays 724a-724n. In other embodiments, one or more of the display devices 724a-724n can be provided by one or more other computing devices 700a or 700b connected to the computing device 700, via the network 140. In some embodiments, software can be designed and constructed to use another computer's display device as a second display device 724a for the computing device 700. For example, in one embodiment, an Apple iPad can connect to a computing device 700 and use the display of the device 700 as an additional display screen that can be used as an extended desktop.

[00219] Referring again to FIG. 7A, the computing device 700 can comprise a storage device 728 (e.g., one or more hard disk drives or redundant arrays of independent disks) for storing an operating system or other related software, and for storing application software programs such as any program related to the software for the NSS. Examples of storage device 728 include, e.g., hard disk drive (HDD); optical drive including CD drive, DVD drive, or BLU-RAY drive; solid-state drive (SSD); USB flash drive; or any other device suitable for storing data. Some storage devices can include multiple volatile and non-volatile memories, including, e.g., solid state hybrid drives that combine hard disks with solid state cache. Some storage device 728 can be non-volatile, mutable, or read-only. Some storage device 728 can be internal and connect to the computing device 700 via a bus 750. Some storage device 728 can be external and connect to the computing device 700 via an I/O device 730 that provides an external bus. Some storage device 728 can connect to the computing device 700 via the network interface 718 over a network, including, e.g., the Remote Disk for MACBOOK AIR by Apple. Some client devices 700 might not require a non-volatile storage device 728 and can be thin clients or zero clients 202. Some storage device 728 can also be used as an installation device 716, and can be suitable for installing software and programs. Additionally, the operating system and the software can be run from a bootable medium, for example, a bootable CD, e.g., KNOPPIX, a bootable CD for GNU/Linux that is available as a GNU/Linux distribution from knoppix.net.

[00220] Computing device 700 can also install software or application from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc., the Mac App Store provided by Apple, Inc., GOOGLE PLAY for Android OS provided by Google Inc., Chrome Webstore for CHROME OS provided by Google Inc., and Amazon Appstore for Android OS and KINDLE FIRE provided by Amazon.com, Inc.

[00221] Furthermore, the computing device 700 can include a network interface 718 to interface to the network 140 through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (e.g., 802.11, T1, T3, Gigabit Ethernet, InfiniBand), broadband connections (e.g., ISDN, Frame Relay, ATM, Gigabit Ethernet, Ethernet-over-SONET, ADSL, VDSL, BPON, GPON, fiber optical including FiOS), wireless connections, or some combination of any or all of the above. Connections can be established using a variety of communication protocols (e.g., TCP/IP, Ethernet, ARCNET, SONET, SDH, Fiber Distributed Data Interface (FDDI), IEEE 802.11a/b/g/n/ac, CDMA, GSM, WiMax, and direct asynchronous connections). In one embodiment, the computing device 700 communicates with other computing devices 700' via any type and/or form of gateway or tunneling protocol, e.g., Secure Socket Layer (SSL) or Transport Layer Security (TLS), or the Citrix Gateway Protocol manufactured by Citrix Systems, Inc. of Ft. Lauderdale, Florida. The network interface 718 can comprise a built-in network adapter, network interface card, PCMCIA network card, EXPRESSCARD network card, card bus network adapter, wireless network adapter, USB network adapter, modem, or any other device suitable for interfacing the computing device 700 to any type of network capable of communication and performing the operations described herein.

[00222] A computing device 700 of the sort depicted in FIG. 7A can operate under the control of an operating system, which controls scheduling of tasks and access to system resources. The computing device 700 can be running any operating system such as any of the versions of the MICROSOFT WINDOWS operating systems, the different releases of the Unix and Linux operating systems, any version of the MAC OS for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, any operating systems for mobile computing devices, or any other operating system capable of running on the computing device and performing the operations described herein. Typical operating systems include, but are not limited to: WINDOWS 2000, WINDOWS Server 2012, WINDOWS CE, WINDOWS Phone, WINDOWS XP, WINDOWS VISTA, WINDOWS 7, WINDOWS RT, and WINDOWS 8, all of which are manufactured by Microsoft Corporation of Redmond, Washington; MAC OS and iOS, manufactured by Apple, Inc. of Cupertino, California; Linux, a freely-available operating system, e.g., the Linux Mint distribution ("distro") or Ubuntu, distributed by Canonical Ltd. of London, United Kingdom; Unix or other Unix-like derivative operating systems; and Android, designed by Google, of Mountain View, California, among others. Some operating systems, including, e.g., the CHROME OS by Google, can be used on zero clients or thin clients, including, e.g., CHROMEBOOKS.

[00223] The computer system 700 can be any workstation, telephone, desktop computer, laptop or notebook computer, netbook, ULTRABOOK, tablet, server, handheld computer, mobile telephone, smartphone or other portable telecommunications device, media playing device, gaming system, mobile computing device, or any other type and/or form of computing, telecommunications or media device that is capable of communication. The computer system 700 has sufficient processor power and memory capacity to perform the operations described herein. In some embodiments, the computing device 700 can have different processors, operating systems, and input devices consistent with the device. The Samsung GALAXY smartphones, e.g., operate under the control of the Android operating system developed by Google, Inc. GALAXY smartphones receive input via a touch interface.

[00224] In some embodiments, the computing device 700 is a gaming system. For example, the computer system 700 can comprise a PLAYSTATION 3, PERSONAL PLAYSTATION PORTABLE (PSP), or PLAYSTATION VITA device manufactured by the Sony Corporation of Tokyo, Japan; a NINTENDO DS, NINTENDO 3DS, NINTENDO WII, or NINTENDO WII U device manufactured by Nintendo Co., Ltd., of Kyoto, Japan; an XBOX 360 device manufactured by the Microsoft Corporation of Redmond, Washington; or an OCULUS RIFT or OCULUS VR device manufactured by OCULUS VR, LLC of Menlo Park, California.

[00225] In some embodiments, the computing device 700 is a digital audio player such as the Apple IPOD, IPOD Touch, and IPOD NANO lines of devices, manufactured by Apple Computer of Cupertino, California. Some digital audio players can have other functionality, including, e.g., a gaming system or any functionality made available by an application from a digital application distribution platform. For example, the IPOD Touch can access the Apple App Store. In some embodiments, the computing device 700 is a portable media player or digital audio player supporting file formats including, but not limited to, MP3, WAV, M4A/AAC, WMA Protected AAC, AIFF, Audible audiobook, and Apple Lossless audio file formats, and .mov, .m4v, and .mp4 MPEG-4 (H.264/MPEG-4 AVC) video file formats.

[00226] In some embodiments, the computing device 700 is a tablet, e.g., the IPAD line of devices by Apple; the GALAXY TAB family of devices by Samsung; or the KINDLE FIRE, by Amazon.com, Inc. of Seattle, Washington. In other embodiments, the computing device 700 is an eBook reader, e.g., the KINDLE family of devices by Amazon.com, or the NOOK family of devices by Barnes & Noble, Inc. of New York City, New York.

[00227] In some embodiments, the communications device 700 includes a combination of devices, e.g., a smartphone combined with a digital audio player or portable media player. For example, one of these embodiments is a smartphone, e.g., the IPHONE family of smartphones manufactured by Apple, Inc.; a Samsung GALAXY family of smartphones manufactured by Samsung, Inc.; or a Motorola DROID family of smartphones. In yet another embodiment, the communications device 700 is a laptop or desktop computer equipped with a web browser and a microphone and speaker system, e.g., a telephony headset. In these embodiments, the communications devices 700 are web-enabled and can receive and initiate phone calls. In some embodiments, a laptop or desktop computer is also equipped with a webcam or other video capture device that enables video chat and video calls.

[00228] In some embodiments, the status of one or more machines 700 in the network is monitored, generally as part of network management. In one of these embodiments, the status of a machine can include an identification of load information (e.g., the number of processes on the machine, CPU and memory utilization), of port information (e.g., the number of available communication ports and the port addresses), or of session status (e.g., the duration and type of processes, and whether a process is active or idle). In another of these embodiments, this information can be identified by a plurality of metrics, and the plurality of metrics can be applied at least in part towards decisions in load distribution, network traffic management, and network failure recovery, as well as any aspects of operations of the present solution described herein. Aspects of the operating environments and components described above will become apparent in the context of the systems and methods disclosed herein.

A Method for Neural Stimulation

[00229] FIG. 8 is a flow diagram of a method of performing visual brain entrainment in accordance with an embodiment. The method 800 can be performed by one or more system, component, module or element depicted in FIGS. 1-7B, including, for example, a neural stimulation system (NSS). In brief overview, the NSS can identify a visual signal to provide at block 805. At block 810, the NSS can generate and transmit the identified visual signal. At block 815, the NSS can receive or determine feedback associated with neural activity, physiological activity, environmental parameters, or device parameters. At block 820, the NSS can manage, control, or adjust the visual signal based on the feedback.

NSS Operating With A Frame

[00230] The NSS 105 can operate in conjunction with the frame 400 including a light source 305 as depicted in FIG. 4A. The NSS 105 can operate in conjunction with the frame 400 including a light source 305 and a feedback sensor 605 as depicted in FIG. 6A. The NSS 105 can operate in conjunction with the frame 400 including at least one shutter 430 as depicted in FIG. 4B. The NSS 105 can operate in conjunction with the frame 400 including at least one shutter 430 and a feedback sensor 605.

[00231] In operation, a user of the frame 400 can wear the frame 400 on their head such that eye wires 415 encircle or substantially encircle their eyes. In some cases, the user can provide an indication to the NSS 105 that the glass frames 400 have been worn and that the user is ready to undergo brainwave entrainment. The indication can include an instruction, command, selection, input, or other indication via an input/output interface, such as a keyboard 726, pointing device 727, or other I/O devices 730a-n. The indication can be a motion-based indication, visual indication, or voice-based indication. For example, the user can provide a voice command that indicates that the user is ready to undergo brainwave entrainment.

[00232] In some cases, the feedback sensor 605 can determine that the user is ready to undergo brainwave entrainment. The feedback sensor 605 can detect that the glass frames 400 have been placed on a user’s head. The NSS 105 can receive motion data, acceleration data, gyroscope data, temperature data, or capacitive touch data to determine that the frames 400 have been placed on the user’s head. The received data, such as motion data, can indicate that the frames 400 were picked up and placed on the user’s head. The temperature data can measure the temperature of or proximate to the frames 400, which can indicate that the frames are on the user’s head. In some cases, the feedback sensor 605 can perform eye tracking to determine a level of attention a user is paying to the light source 305 or feedback sensor 605. The NSS 105 can detect that the user is ready responsive to determining that the user is paying a high level of attention to the light source 305 or feedback sensor 605. For example, staring at, gazing or looking in the direction of the light source 305 or feedback sensor 605 can provide an indication that the user is ready to undergo brainwave entrainment.
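
By way of a non-limiting sketch, the readiness determination described above could combine several sensor cues roughly as follows; the helper names, the temperature threshold, and the use of the attention category are hypothetical assumptions rather than required elements.

```python
def frames_worn(motion_detected, temperature_c, capacitive_touch, skin_temp_c=30.0):
    """Illustrative check that the frames 400 have been picked up and placed on
    the user's head, based on motion data, temperature near the frame, and touch."""
    return motion_detected and capacitive_touch and temperature_c >= skin_temp_c

def user_ready(worn, attention):
    """The user is considered ready when the frames are worn and a high level of
    attention is being paid to the light source 305 or feedback sensor 605."""
    return worn and attention == "high"

print(user_ready(frames_worn(True, 31.5, True), "high"))  # True
```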

[00233] Thus, the NSS 105 can detect or determine that the frames 400 have been worn and that the user is in a ready state, or the NSS 105 can receive an indication or confirmation from the user that the user has worn the frames 400 and the user is ready to undergo brainwave entrainment. Upon determining that the user is ready, the NSS 105 can initialize the brainwave entrainment process. In some embodiments, the NSS 105 can access a profile data structure 145. For example, a profile manager 125 can query the profile data structure 145 to determine one or more parameter for the external visual stimulation used for the brain entrainment process. Parameters can include, for example, a type of visual stimulation, an intensity of the visual stimulation, frequency of the visual stimulation, duration of the visual stimulation, or wavelength of the visual stimulation. The profile manager 125 can query the profile data structure 145 to obtain historical brain entrainment information, such as prior visual stimulation sessions. The profile manager 125 can perform a lookup in the profile data structure 145. The profile manager 125 can perform a look-up with a username, user identifier, location information, fingerprint, biometric identifier, retina scan, voice recognition and authentication, or other identifying technique.

[00234] The NSS 105 can determine a type of external visual stimulation based on the hardware 400. The NSS 105 can determine the type of external visual stimulation based on the type of light source 305 available. For example, if the light source 305 includes a monochromatic LED that generates light waves in the red spectrum, the NSS 105 can determine that the type of visual stimulation includes pulses of light transmitted by the light source. However, if the frames 400 do not include an active light source 305, but, instead, include one or more shutters 430, the NSS 105 can determine that the light source is sunlight or ambient light that is to be modulated as it enters the user’s eye via a plane formed by the eye wire 415.

[00235] In some embodiments, the NSS 105 can determine the type of external visual stimulation based on historical brainwave entrainment sessions. For example, the profile data structure 145 can be pre-configured with information about the type of visual signaling component 150.

[00236] The NSS 105 can determine, via the profile manager 125, a modulation frequency for the pulse train or the ambient light. For example, NSS 105 can determine, from the profile data structure 145, that the modulation frequency for the external visual stimulation should be set to 40 Hz. Depending on the type of visual stimulation, the profile data structure 145 can further indicate a pulse length, intensity, wavelength of the light wave forming the light pulse, or duration of the pulse train.
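
A simplified sketch of such a profile lookup follows; the dictionary stands in for the profile data structure 145, and its field names and values are illustrative assumptions rather than required parameters.

```python
# Hypothetical stand-in for the profile data structure 145; field names are illustrative.
PROFILES = {
    "user-001": {
        "stimulation_type": "led_pulse_train",
        "modulation_frequency_hz": 40.0,
        "pulse_length_ms": 12.5,
        "intensity": "low",
        "wavelength_nm": 620,
        "pulse_train_duration_min": 60,
    }
}

def lookup_parameters(user_id, defaults=None):
    """Return the stimulation parameters for a user, falling back to defaults
    (e.g., a 40 Hz modulation frequency) if no profile entry exists."""
    defaults = defaults or {"modulation_frequency_hz": 40.0}
    return PROFILES.get(user_id, defaults)

params = lookup_parameters("user-001")
print(params["modulation_frequency_hz"])  # 40.0
```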

[00237] In some cases, the NSS 105 can determine or adjust one or more parameters of the external visual stimulation. For example, the NSS 105 (e.g., via feedback component 160 or feedback sensor 605) can determine a level or amount of ambient light. The NSS 105 (e.g., via light adjustment module 115 or side effects management module 130) can establish, initialize, set, or adjust the intensity or wavelength of the light pulse. For example, the NSS 105 can determine that there is a low level of ambient light. Due to the low level of ambient light, the user's pupils may be dilated. The NSS 105 can determine, based on detecting a low level of ambient light, that the user's pupils are likely dilated. In response to determining that the user's pupils are likely dilated, the NSS 105 can set a low level of intensity for the pulse train. The NSS 105 can further use a light wave having a longer wavelength (e.g., red), which may reduce strain on the eyes.

[00238] In some embodiments, the NSS 105 can monitor (e.g., via feedback monitor 135 and feedback component 160) the level of ambient light throughout the brainwave entrainment process to automatically and periodically adjust the intensity or color of the light pulses. For example, if the user began the brainwave entrainment process when there was a high level of ambient light, the NSS 105 can initially set a higher intensity level for the light pulses and use a color that includes light waves having shorter wavelengths (e.g., blue). However, in some embodiments in which the ambient light level decreases throughout the brainwave entrainment process, the NSS 105 can automatically detect the decrease in ambient light and, in response to the detection, lower the intensity while increasing the wavelength of the light wave. The NSS 105 can adjust the light pulses to provide a high contrast ratio to facilitate brainwave entrainment.

[00239] In some embodiments, the NSS 105 (e.g., via feedback monitor 135 and feedback component 160) can monitor or measure physiological conditions to set or adjust a parameter of the light wave. For example, the NSS 105 can monitor or measure a level of pupil dilation to adjust or set a parameter of the light wave. In some embodiments, the NSS 105 can monitor or measure heart rate, pulse rate, blood pressure, body temperature, perspiration, or brain activity to set or adjust a parameter of the light wave.

[00240] In some embodiments, the NSS 105 can be preconfigured to initially transmit light pulses having a lowest setting for light wave intensity (e.g., low amplitude of the light wave or high wavelength of the light wave) and gradually increase the intensity (e.g., increase the amplitude of the light wave or decrease the wavelength of the light wave) while monitoring feedback until an optimal light intensity is reached. An optimal light intensity can refer to the highest intensity without adverse physiological side effects, such as blindness, seizures, heart attack, migraines, or other discomfort. The NSS 105 (e.g., via side effects management module 130) can monitor the physiological symptoms to identify the adverse side effects of the external visual stimulation, and adjust (e.g., via light adjustment module 115) the external visual stimulation accordingly to reduce or eliminate the adverse side effects.

[00241] In some embodiments, the NSS 105 (e.g., via light adjustment module 115) can adjust a parameter of the light wave or light pulse based on a level of attention. For example, during the brainwave entrainment process, the user may get bored, lose focus, fall asleep, or otherwise not pay attention to the light pulses. Not paying attention to the light pulses may reduce the efficacy of the brainwave entrainment process, resulting in neurons oscillating at a frequency different from the desired modulation frequency of the light pulses.
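
The gradual ramp described above could be sketched roughly as follows; side_effects_detected and apply_intensity are hypothetical hooks into the side effects management module 130 and the light adjustment module 115, and the step size and starting value are arbitrary placeholders.

```python
def side_effects_detected():
    """Hypothetical check with the side effects management module 130 (e.g., eye
    strain, migraine, or other discomfort reported or measured via feedback)."""
    return False

def apply_intensity(value):
    """Hypothetical setter for the light wave amplitude of the light source 305."""
    pass

def ramp_intensity(start=0.05, step=0.05, maximum=1.0):
    """Begin at the lowest intensity setting and increase gradually, returning the
    highest intensity reached without adverse side effects (0.0 if none tolerated)."""
    intensity = start
    best = 0.0
    while intensity <= maximum:
        apply_intensity(intensity)
        if side_effects_detected():
            break                            # back off to the last well-tolerated intensity
        best = intensity
        intensity = round(intensity + step, 4)
    return best

print(ramp_intensity())  # 1.0 when no side effects are detected
```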

[00242] NSS 105 can detect the level of attention the user is paying to the light pulses using the feedback monitor 135 and one or more feedback component 160. The NSS 105 can perform eye tracking to determine the level of attention the user is providing to the light pulses based on the gaze direction of the retina or pupil. The NSS 105 can measure eye movement to determine the level of attention the user is paying to the light pulses. The NSS 105 can provide a survey or prompt asking for user feedback that indicates the level of attention the user is paying to the light pulses. Responsive to determining that the user is not paying a satisfactory amount of attention to the light pulses (e.g., a level of eye movement that is greater than a threshold or a gaze direction that is outside the direct visual field of the light source 305), the light adjustment module 115 can change a parameter of the light source to gain the user’s attention. For example, the light adjustment module 115 can increase the intensity of the light pulse, adjust the color of the light pulse, or change the duration of the light pulse. The light adjustment module 115 can randomly vary one or more parameters of the light pulse. The light adjustment module 115 can initiate an attention seeking light sequence configured to regain the user’s attention. For example, the light sequence can include a change in color or intensity of the light pulses in a predetermined, random, or pseudo-random pattern. The attention seeking light sequence can enable or disable different light sources if the visual signaling component 150 includes multiple light sources. Thus, the light adjustment module 115 can interact with the feedback monitor 135 to determine a level of attention the user is providing to the light pulses, and adjust the light pulses to regain the user’s attention if the level of attention falls below a threshold.

[00243] In some embodiments, the light adjustment module 115 can change or adjust one or more parameters of the light pulse or light wave at predetermined time intervals (e.g., every 5 minutes, 10 minutes, 15 minutes, or 20 minutes) to regain or maintain the user's attention level.

[00244] In some embodiments, the NSS 105 (e.g., via unwanted frequency filtering module 120) can filter, block, attenuate, or remove unwanted visual external stimulation. Unwanted visual external stimulation can include, for example, unwanted modulation frequencies, unwanted intensities, or unwanted wavelengths of light waves. The NSS 105 can deem a modulation frequency to be unwanted if the modulation frequency of a pulse train is different or substantially different (e.g., 1%, 2%, 5%, 10%, 15%, 20%, 25%, or more than 25%) from a desired frequency.

[00245] For example, the desired modulation frequency for brainwave entrainment can be 40 Hz. However, a modulation frequency of 20 Hz or 80 Hz can hinder brainwave entrainment. Thus, the NSS 105 can filter out the light pulses or light waves corresponding to the 20 Hz or 80 Hz modulation frequency.
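
A minimal sketch of the "unwanted" determination described above follows; the 5% tolerance is a placeholder rather than a prescribed value.

```python
def is_unwanted(observed_hz, desired_hz=40.0, tolerance=0.05):
    """An observed modulation frequency is deemed unwanted when it differs from
    the desired frequency by more than the tolerance (e.g., 5%)."""
    return abs(observed_hz - desired_hz) / desired_hz > tolerance

print(is_unwanted(40.5))  # False: within 5% of 40 Hz
print(is_unwanted(20.0))  # True: a 20 Hz pulse train would hinder 40 Hz entrainment
print(is_unwanted(80.0))  # True
```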

[00246] In some embodiments, the NSS 105 can detect, via feedback component 160, that there are light pulses from an ambient light source that corresponds to an unwanted modulation frequency of 20 Hz. The NSS 105 can further determine the wavelength of the light waves of the light pulses corresponding to the unwanted modulation frequency. The NSS 105 can instruct the filtering component 155 to filter out the wavelength corresponding to the unwanted modulation frequency. For example, the wavelength corresponding to the unwanted modulation frequency can correspond to the color blue. The filtering component 155 can include an optical filter that can selectively transmit light in a particular range of wavelengths or colors, while blocking one or more other ranges of wavelengths or colors. The optical filter can modify the magnitude or phase of the incoming light wave for a range of wavelengths. For example, the optical filter can be configured to block, reflect or attenuate the blue light wave corresponding to the unwanted modulation frequency. The light adjustment module 115 can change the wavelength of the light wave generated by the light generation module 110 and light source 305 such that the desired modulation frequency is not blocked or attenuated by the unwanted frequency filtering module 120.

NSS Operating with a Virtual Reality Headset

[00247] The NSS 105 can operate in conjunction with the virtual reality headset 401 including a light source 305 as depicted in FIG. 4C. The NSS 105 can operate in conjunction with the virtual reality headset 401 including a light source 305 and a feedback sensor 605 as depicted in FIG. 4C. In some embodiments, the NSS 105 can determine that the visual signaling component 150 hardware includes a virtual reality headset 401. Responsive to determining that the visual signaling component 150 includes a virtual reality headset 401, the NSS 105 can determine that the light source 305 includes a display screen of a smartphone or other mobile computing device.

[00248] The virtual reality headset 401 can provide an immersive, non-disruptive visual stimulation experience. The virtual reality headset 401 can provide an augmented reality experience. The feedback sensors 605 can capture pictures or video of the physical, real world to provide the augmented reality experience. The unwanted frequency filtering module 120 can filter out unwanted modulation frequencies prior to projecting, displaying or providing the augmented reality images via the display screen 305.

[00249] In operation, a user of the frame 401 can wear the frame 401 on their head such that the virtual reality headset eye sockets 465 cover the user’s eyes. The virtual reality headset eye sockets 465 can encircle or substantially encircle their eyes. The user can secure the virtual reality headset 401 to the user’s head using one or more straps 455 or 460, a skull cap, or other fastening mechanism. In some cases, the user can provide an indication to the NSS 105 that the virtual reality headset 401 has been placed and secured to the user’s head and that the user is ready to undergo brainwave entrainment. The indication can include an instruction, command, selection, input, or other indication via an input/output interface, such as a keyboard 726, pointing device 727, or other I/O devices 730a-n. The indication can be a motion-based indication, visual indication, or voice-based indication. For example, the user can provide a voice command that indicates that the user is ready to undergo brainwave entrainment.

[00250] In some cases, the feedback sensor 605 can determine that the user is ready to undergo brainwave entrainment. The feedback sensor 605 can detect that the virtual reality headset 401 has been placed on a user’s head. The NSS 105 can receive motion data, acceleration data, gyroscope data, temperature data, or capacitive touch data to determine that the virtual reality headset 401 has been placed on the user’s head. The received data, such as motion data, can indicate that the virtual reality headset 401 was picked up and placed on the user’s head. The temperature data can measure the temperature of or proximate to the virtual reality headset 401, which can indicate that the virtual reality headset 401 is on the user’s head. In some cases, the feedback sensor 605 can perform eye tracking to determine a level of attention a user is paying to the light source 305 or feedback sensor 605. The NSS 105 can detect that the user is ready responsive to determining that the user is paying a high level of attention to the light source 305 or feedback sensor 605. For example, staring at, gazing or looking in the direction of the light source 305 or feedback sensor 605 can provide an indication that the user is ready to undergo brainwave entrainment.

[00251] In some embodiments, a sensor 605 on the straps 455, straps 460 or eye sockets 465 can detect that the virtual reality headset 401 is secured, placed, or positioned on the user’s head. The sensor 605 can be a touch sensor that senses or detects the touch of the user’s head.

[00252] Thus, the NSS 105 can detect or determine that the virtual reality headset 401 has been worn and that the user is in a ready state, or the NSS 105 can receive an indication or confirmation from the user that the user has worn the virtual reality headset 401 and the user is ready to undergo brainwave entrainment. Upon determining that the user is ready, the NSS 105 can initialize the brainwave entrainment process. In some embodiments, the NSS 105 can access a profile data structure 145. For example, a profile manager 125 can query the profile data structure 145 to determine one or more parameter for the external visual stimulation used for the brain entrainment process. Parameters can include, for example, a type of visual stimulation, an intensity of the visual stimulation, frequency of the visual stimulation, duration of the visual stimulation, or wavelength of the visual stimulation. The profile manager 125 can query the profile data structure 145 to obtain historical brain entrainment information, such as prior visual stimulation sessions. The profile manager 125 can perform a lookup in the profile data structure 145. The profile manager 125 can perform a look-up with a username, user identifier, location information, fingerprint, biometric identifier, retina scan, voice recognition and authentication, or other identifying technique.

[00253] The NSS 105 can determine a type of external visual stimulation based on the hardware 401. The NSS 105 can determine the type of external visual stimulation based on the type of light source 305 available. For example, if the light source 305 includes a smartphone or display device, the visual stimulation can include turning on and off the display screen of the display device. The visual stimulation can include displaying a pattern on the display device 305, such as a checkered pattern, that can alternate in accordance with the desired frequency modulation. The visual stimulation can include light pulses generated by a light source 305 such as an LED that is placed within the virtual reality headset 401 enclosure.

[00254] In cases where the virtual reality headset 401 provides an augmented reality experience, the visual stimulation can include overlaying content on the display device and modulating the overlaid content at the desired modulation frequency. For example, the virtual reality headset 401 can include a camera 605 that captures the real, physical world. While displaying the captured image of the real, physical world, the NSS 105 can also display content that is modulated at the desired modulation frequency. The NSS 105 can overlay the content modulated at the desired modulation frequency. The NSS 105 can otherwise modify, manipulate, modulate, or adjust a portion of the display screen or a portion of the augmented reality to generate or provide the desired modulation frequency.

[00255] For example, the NSS 105 can modulate one or more pixels based on the desired modulation frequency. The NSS 105 can turn pixels on and off based on the modulation frequency. The NSS 105 can turn off pixels on any portion of the display device. The NSS 105 can turn on and off pixels in a pattern. The NSS 105 can turn on and off pixels in the direct visual field or peripheral visual field. The NSS 105 can track or detect a gaze direction of the eye and turn on and off pixels in the gaze direction so the light pulses (or modulation) are in the direct vision field. Thus, modulating the overlaid content or otherwise manipulating the augmented reality display or other image provided via a display device in the virtual reality headset 401 can generate light pulses or light flashes having a modulation frequency configured to facilitate brainwave entrainment.
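One way to picture gaze-directed pixel modulation is a square-wave gate applied to a patch of pixels around the tracked gaze point. The sketch below is illustrative only; the frame representation, gaze coordinates, 50% duty cycle, and patch radius are assumptions.

```python
# Sketch of gaze-directed pixel modulation at 40 Hz (hypothetical frame/gaze representation).
import numpy as np

def modulate_region(frame: np.ndarray, gaze_xy: tuple, t: float,
                    freq_hz: float = 40.0, radius: int = 50) -> np.ndarray:
    """Toggle a square patch of pixels around the gaze point on/off at the modulation frequency."""
    on = (t * freq_hz) % 1.0 < 0.5          # 50% duty-cycle square wave
    out = frame.copy()
    x, y = gaze_xy
    if on:
        out[max(0, y - radius):y + radius, max(0, x - radius):x + radius] = 255
    return out

frame = np.zeros((480, 640), dtype=np.uint8)
modulated = modulate_region(frame, gaze_xy=(320, 240), t=0.010)  # 10 ms into the cycle: patch is on
print(modulated[240, 320])
```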

[00256] The NSS 105 can determine, via the profile manager 125, a modulation frequency for the pulse train or the ambient light. For example, NSS 105 can determine, from the profile data structure 145, that the modulation frequency for the external visual stimulation should be set to 40 Hz. Depending on the type of visual stimulation, the profile data structure 145 can further indicate a number of pixels to modulate, intensity of pixels to modulate, pulse length, intensity, wavelength of the light wave forming the light pulse, or duration of the pulse train.

[00257] In some cases, the NSS 105 can determine or adjust one or more parameter of the external visual stimulation. For example, the NSS 105 (e.g., via feedback component 160 or feedback sensor 605) can determine a level or amount of light in the captured image used to provide the augmented reality experience. The NSS 105 (e.g., via light adjustment module 115 or side effects management module 130) can establish, initialize, set, or adjust the intensity or wavelength of the light pulse based on the light level in the image data corresponding to the augmented reality experience. For example, the NSS 105 can determine that there is a low level of light in the augmented reality display because it may be dark outside. Due to the low level of light in the augmented reality display, the user’s pupils may be dilated. The NSS 105 can determine, based on detecting a low level of light, that the user’s pupils are likely dilated. In response to determining that the user’s pupils are likely dilated, the NSS 105 can set a low level of intensity for the light pulses or light source providing the modulation frequency. The NSS 105 can further use a light wave having a longer wavelength (e.g., red), which may reduce strain on the eyes.

[00258] In some embodiments, the NSS 105 can monitor (e.g., via feedback monitor 135 and feedback component 160) the level of light throughout the brainwave entrainment process to automatically and periodically adjust the intensity or color of light pulses. For example, if the user began the brainwave entrainment process when there was a high level of ambient light, the NSS 105 can initially set a higher intensity level for the light pulses and use a color that includes light waves having lower wavelengths (e.g., blue). However, as the light level decreases throughout the brainwave entrainment process, the NSS 105 can automatically detect the decrease in light and, in response to the detection, adjust or lower the intensity while increasing the wavelength of the light wave. The NSS 105 can adjust the light pulses to provide a high contrast ratio to facilitate brainwave entrainment.
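A simple way to express the light-level-to-intensity mapping described above is a clamped linear function: dim surroundings (likely dilated pupils) map to the low end of the intensity range. The mapping, lux bounds, and intensity bounds below are all assumptions for illustration.

```python
# Sketch of setting pulse intensity from ambient light level (assumed linear mapping and bounds).
def intensity_from_ambient(ambient_lux: float,
                           low_lux: float = 10.0, high_lux: float = 500.0,
                           min_intensity: float = 0.1, max_intensity: float = 1.0) -> float:
    """Map ambient light to pulse intensity: dim surroundings (dilated pupils) get a low intensity."""
    frac = (ambient_lux - low_lux) / (high_lux - low_lux)
    frac = min(max(frac, 0.0), 1.0)
    return min_intensity + frac * (max_intensity - min_intensity)

print(intensity_from_ambient(5.0))    # dark room -> 0.1 (lowest assumed intensity)
print(intensity_from_ambient(500.0))  # bright room -> 1.0
```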

[00259] In some embodiments, the NSS 105 (e.g., via feedback monitor 135 and feedback component 160) can monitor or measure physiological conditions to set or adjust a parameter of the light pulses while the user is wearing the virtual reality headset 401. For example, the NSS 105 can monitor or measure a level of pupil dilation to adjust or set a parameter of the light wave. In some embodiments, the NSS 105 can monitor or measure, via one or more feedback sensor of the virtual reality headset 401 or other feedback sensor, a heart rate, pulse rate, blood pressure, body temperature, perspiration, or brain activity to set or adjust a parameter of the light wave.

[00260] In some embodiments, the NSS 105 can be preconfigured to initially transmit, via display device 305, light pulses having a lowest setting for light wave intensity (e.g., low amplitude of the light wave or high wavelength of the light wave) and gradually increase the intensity (e.g., increase the amplitude of the light wave or decrease the wavelength of the light wave) while monitoring feedback until an optimal light intensity is reached. An optimal light intensity can refer to a highest intensity without adverse physiological side effects, such as blindness, seizures, heart attack, migraines, or other discomfort. The NSS 105 (e.g., via side effects management module 130) can monitor the physiological symptoms to identify the adverse side effects of the external visual stimulation, and adjust (e.g., via light adjustment module 115) the external visual stimulation accordingly to reduce or eliminate the adverse side effects.
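The ramp-and-monitor behavior described above can be pictured as a loop that steps intensity upward until feedback reports a side effect, then backs off one step. The sketch below is a minimal illustration; the callback names, starting value, and step size are assumptions rather than a specification of the side effects management module 130.

```python
# Sketch of the gradual intensity ramp with side-effect monitoring (hypothetical callbacks).
def ramp_intensity(check_side_effects, apply_intensity,
                   start: float = 0.05, step: float = 0.05, maximum: float = 1.0) -> float:
    """Raise intensity in small steps until a side effect is reported or the maximum is reached."""
    intensity = start
    while intensity <= maximum:
        apply_intensity(intensity)
        if check_side_effects():                  # e.g., elevated heart rate or reported discomfort
            return max(start, intensity - step)   # back off one step
        intensity += step
    return maximum

# Example with stub callbacks: no side effects ever reported, so the ramp reaches the maximum.
final = ramp_intensity(check_side_effects=lambda: False, apply_intensity=lambda i: None)
print(final)
```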

[00261] In some embodiments, the NSS 105 (e.g., via light adjustment module 115) can adjust a parameter of the light wave or light pulse based on a level of attention. For example, during the brainwave entrainment process, the user may get bored, lose focus, fall asleep, or otherwise not pay attention to the light pulses generated via the display screen 305 of the virtual reality headset 401. Not paying attention to the light pulses may reduce the efficacy of the brainwave entrainment process, resulting in neurons oscillating at a frequency different from the desired modulation frequency of the light pulses.

[00262] NSS 105 can detect the level of attention the user is paying or providing to the light pulses using the feedback monitor 135 and one or more feedback component 160 (e.g., including feedback sensors 605). The NSS 105 can perform eye tracking to determine the level of attention the user is providing to the light pulses based on the gaze direction of the retina or pupil. The NSS 105 can measure eye movement to determine the level of attention the user is paying to the light pulses. The NSS 105 can provide a survey or prompt asking for user feedback that indicates the level of attention the user is paying to the light pulses. Responsive to determining that the user is not paying a satisfactory amount of attention to the light pulses (e.g., a level of eye movement that is greater than a threshold or a gaze direction that is outside the direct visual field of the light source 305), the light adjustment module 115 can change a parameter of the light source 305 or display device 305 to gain the user’s attention. For example, the light adjustment module 115 can increase the intensity of the light pulse, adjust the color of the light pulse, or change the duration of the light pulse. The light adjustment module 115 can randomly vary one or more parameters of the light pulse. The light adjustment module 115 can initiate an attention seeking light sequence configured to regain the user’s attention. For example, the light sequence can include a change in color or intensity of the light pulses in a predetermined, random, or pseudo-random pattern. The attention seeking light sequence can enable or disable different light sources if the visual signaling component 150 includes multiple light sources. Thus, the light adjustment module 115 can interact with the feedback monitor 135 to determine a level of attention the user is providing to the light pulses, and adjust the light pulses to regain the user’s attention if the level of attention falls below a threshold.

[00263] In some embodiments, the light adjustment module 115 can change or adjust one or more parameter of the light pulse or light wave at predetermined time intervals (e.g., every 5 minutes, 10 minutes, 15 minutes, or 20 minutes) to regain or maintain the user’s attention level.

[00264] In some embodiments, the NSS 105 (e.g., via unwanted frequency filtering module 120) can filter, block, attenuate, or remove unwanted visual external stimulation. Unwanted visual external stimulation can include, for example, unwanted modulation frequencies, unwanted intensities, or unwanted wavelengths of light waves. The NSS 105 can deem a modulation frequency to be unwanted if the modulation frequency of a pulse train is different or substantially different (e.g., 1%, 2%, 5%, 10%, 15%, 20%, 25%, or more than 25%) from a desired frequency.

[00265] For example, the desired modulation frequency for brainwave entrainment can be 40 Hz. However, a modulation frequency of 20 Hz or 80 Hz can hinder brainwave entrainment. Thus, the NSS 105 can filter out the light pulses or light waves corresponding to the 20 Hz or 80 Hz modulation frequency. For example, the virtual reality headset 401 can detect unwanted modulation frequencies in the physical, real world and eliminate, attenuate, filter out or otherwise remove the unwanted frequencies prior to generating or providing the augmented reality experience. The NSS 105 can include an optical filter configured to perform digital signal processing or digital image processing to detect the unwanted modulation frequency in the real world captured by the feedback sensor 605. The NSS 105 can detect other content, image or motion having an unwanted parameter (e.g., color, brightness, contrast ratio, modulation frequency), and eliminate same from the augmented reality experience projected to the user via the display screen 305. The NSS 105 can apply a color filter to adjust the color or remove a color of the augmented reality display. The NSS 105 can adjust, modify, or manipulate the brightness, contrast ratio, sharpness, tint, hue, or other parameter of the image or video displayed via the display device 305.

[00266] In some embodiments, the NSS 105 can detect, via feedback component 160, that there is captured image or video content from the real, physical world that corresponds to an unwanted modulation frequency of 20 Hz. The NSS 105 can further determine the wavelength of the light waves of the light pulses corresponding to the unwanted modulation frequency. The NSS 105 can instruct the filtering component 155 to filter out the wavelength corresponding to the unwanted modulation frequency. For example, the wavelength corresponding to the unwanted modulation frequency can correspond to the color blue. The filtering component 155 can include a digital optical filter that can digitally remove content or light in a particular range of wavelengths or colors, while allowing one or more other ranges of wavelengths or colors. The digital optical filter can modify the magnitude or phase of the image for a range of wavelengths. For example, the digital optical filter can be configured to attenuate, erase, replace or otherwise alter the blue light wave corresponding to the unwanted modulation frequency. The light adjustment module 115 can change the wavelength of the light wave generated by the light generation module 110 and display device 305 such that the desired modulation frequency is not blocked or attenuated by the unwanted frequency filtering module 120.

NSS Operating With a Tablet

[00267] The NSS 105 can operate in conjunction with the tablet 500 as depicted in FIGs. 5A-5D. In some embodiments, the NSS 105 can determine that the visual signaling component 150 hardware includes a tablet device 500 or other display screen that is not affixed or secured to a user’s head. The tablet 500 can include a display screen that has one or more component or function of the display screen 305 or light source 305 depicted in conjunction with FIGs. 4A and 4C. The light source 305 in a tablet can be the display screen. The tablet 500 can include one or more feedback sensor that includes one or more component or function of the feedback sensor depicted in conjunction with FIGs. 4B, 4C and 6A.

[00268] The tablet 500 can communicate with the NSS 105 via a network, such as a wireless network or a cellular network. The tablet 500 can, in some embodiments, execute the NSS 105 or a component thereof. For example, the tablet 500 can launch, open or switch to an application or resource configured to provide at least one functionality of the NSS 105. The tablet 500 can execute the application as a background process or a foreground process. For example, the graphical user interface for the application can be in the background while the application causes the display screen 305 of the tablet to overlay content or light that changes or modulates at a desired frequency for brain entrainment (e.g., 40 Hz).

[00269] The tablet 500 can include one or more feedback sensors 605. In some embodiments, the tablet can use the one or more feedback sensors 605 to detect that a user is holding the tablet 500. The tablet can use the one or more feedback sensors 605 to determine a distance between the light source 305 and the user. The tablet can use the one or more feedback sensors 605 to determine a distance between the light source 305 and the user’s head. The tablet can use the one or more feedback sensors 605 to determine a distance between the light source 305 and the user’s eyes.

[00270] In some embodiments, the tablet 500 can use a feedback sensor 605 that includes a receiver to determine the distance. The tablet can transmit a signal and measure the amount of time it takes for the transmitted signal to leave the tablet 500, bounce off the object (e.g., user’s head) and be received by the feedback sensor 605. The tablet 500 or NSS 105 can determine the distance based on the measured amount of time and the speed of the transmitted signal (e.g., speed of light).
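The time-of-flight computation described above divides the round-trip path in two: the one-way distance is the propagation speed multiplied by half the measured round-trip time. The sketch below is illustrative; the example round-trip time is an assumption.

```python
# Sketch of the time-of-flight distance estimate (round trip at the signal's propagation speed).
SPEED_OF_LIGHT_M_S = 299_792_458.0   # for an optical/IR signal; use ~343 m/s for ultrasound

def distance_from_round_trip(round_trip_seconds: float, speed: float = SPEED_OF_LIGHT_M_S) -> float:
    """The signal travels to the object and back, so the one-way distance is half the path length."""
    return speed * round_trip_seconds / 2.0

print(distance_from_round_trip(4e-9))  # ~0.6 m for an assumed 4 ns optical round trip
```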

[00271] In some embodiments, the tablet 500 can include two feedback sensors 605 to determine a distance. The two feedback sensors 605 can include a first feedback sensor 605 that is the transmitter and a second feedback sensor that is the receiver.

[00272] In some embodiments, the tablet 500 can include two or more feedback sensors 605 that include two or more cameras. The two or more cameras can measure the angles and the position of the object (e.g., the user’s head) on each camera, and use the measured angles and position to determine or compute the distance between the tablet 500 and the object.

[00273] In some embodiments, the tablet 500 (or application thereof) can determine the distance between the tablet and the user’s head by receiving user input. For example, user input can include an approximate size of the user’s head. The tablet 500 can then determine the distance from the user’s head based on the inputted approximate size.

[00274] The tablet 500, application, or NSS 105 can use the measured or determined distance to adjust the light pulses or flashes of light emitted by the light source 305 of the tablet 500. The tablet 500, application, or NSS 105 can use the distance to adjust one or more parameter of the light pulses, flashes of light or other content emitted via the light source 305 of the tablet 500. For example, the tablet 500 can adjust the intensity of the light pulses emitted by light source 305 based on the distance. The tablet 500 can adjust the intensity based on the distance in order to maintain a consistent or similar intensity at the eye irrespective of the distance between the light source 305 and the eye. The tablet can increase the intensity proportional to the square of the distance.
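The scaling rule described above follows the inverse-square behavior of light: if the emitted intensity grows with the square of the distance, the intensity arriving at the eye stays roughly constant. The reference intensity and reference distance in the sketch below are assumptions for illustration only.

```python
# Sketch of inverse-square compensation: emitted intensity scales with the square of the distance
# so the intensity arriving at the eye stays roughly constant (reference values are assumptions).
def compensated_intensity(distance_m: float, reference_intensity: float = 0.2,
                          reference_distance_m: float = 0.3) -> float:
    """Scale the emitted intensity by (d / d_ref)^2 to hold the perceived intensity steady."""
    return reference_intensity * (distance_m / reference_distance_m) ** 2

print(compensated_intensity(0.3))  # at the reference distance: 0.2
print(compensated_intensity(0.6))  # twice as far: 0.8 (four times the emitted intensity)
```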

[00275] The tablet 500 can manipulate one or more pixels on the display screen 305 to generate the light pulses or modulation frequency for brainwave entrainment. The tablet 500 can overlay light sources, light pulses or other patterns to generate the modulation frequency for brainwave entrainment. Similar to the virtual reality headset 401, the tablet can filter out or modify unwanted frequencies, wavelengths or intensity.

[00276] Similar to the frames 400, the tablet 500 can adjust a parameter of the light pulses or flashes of light generated by the light source 305 based on ambient light, environmental parameters, or feedback.

[00277] In some embodiments, the tablet 500 can execute an application that is configured to generate the light pulses or modulation frequency for brainwave entrainment. The application can execute in the background of the tablet such that all content displayed on a display screen of the tablet is displayed as light pulses at the desired frequency. The tablet can be configured to detect a gaze direction of the user. In some embodiments, the tablet may detect the gaze direction by capturing an image of the user’s eye via the camera of the tablet. The tablet 500 can be configured to generate light pulses at particular locations of the display screen based on the gaze direction of the user. In embodiments where direct vision field is to be employed, the light pulses can be displayed at locations of the display screen that correspond to the user’s gaze. In embodiments where peripheral vision field is to be employed, the light pulses can be displayed at locations of the display screen that are outside the portion of the display screen corresponding to the user’s gaze.

Neural Stimulation via Auditory Stimulation

[00278] FIG. 9 is a block diagram depicting a system for neural stimulation via auditory stimulation in accordance with an embodiment. The system 900 can include a neural stimulation system (“NSS”) 905. The NSS 905 can be referred to as an auditory NSS 905 or NSS 905. In brief overview, the auditory neural stimulation system (“NSS”) 905 can include, access, interface with, or otherwise communicate with one or more of an audio generation module 910, audio adjustment module 915, unwanted frequency filtering module 920, profile manager 925, side effects management module 930, feedback monitor 935, data repository 940, audio signaling component 950, filtering component 955, or feedback component 960. The audio generation module 910, audio adjustment module 915, unwanted frequency filtering module 920, profile manager 925, side effects management module 930, feedback monitor 935, audio signaling component 950, filtering component 955, or feedback component 960 can each include at least one processing unit or other logic device such as programmable logic array engine, or module configured to communicate with the data repository 940. The audio generation module 910, audio adjustment module 915, unwanted frequency filtering module 920, profile manager 925, side effects management module 930, feedback monitor 935, audio signaling component 950, filtering component 955, or feedback component 960 can be separate components, a single component, or part of the NSS 905. The system 100 and its components, such as the NSS 905, may include hardware elements, such as one or more processors, logic devices, or circuits. The system 100 and its components, such as the NSS 905, can include one or more hardware or interface component depicted in system 700 in FIGs. 7A and 7B. For example, a component of system 100 can include or execute on one or more processors 721, access storage 728 or memory 722, and communicate via network interface 718.

[00279] Still referring to FIG. 9, and in further detail, the NSS 905 can include at least one audio generation module 910. The audio generation module 910 can be designed and constructed to interface with an audio signaling component 950 to provide instructions or otherwise cause or facilitate the generation of an audio signal, such as an audio burst, audio pulse, audio chirp, audio sweep, or other acoustic wave having one or more predetermined parameters. The audio generation module 910 can include hardware or software to receive and process instructions or data packets from one or more module or component of the NSS 905. The audio generation module 910 can generate instructions to cause the audio signaling component 950 to generate an audio signal. The audio generation module 910 can control or enable the audio signaling component 950 to generate the audio signal having one or more predetermined parameters.

[00280] The audio generation module 910 can be communicatively coupled to the audio signaling component 950. The audio generation module 910 can communicate with the audio signaling component 950 via a circuit, electrical wire, data port, network port, power wire, ground, electrical contacts or pins. The audio generation module 910 can wirelessly communicate with the audio signaling component 950 using one or more wireless protocols such as BlueTooth, BlueTooth Low Energy, Zigbee, Z-Wave, IEEE 802, WIFI, 3G, 4G, LTE, near field communications (“NFC”), or other short, medium or long range communication protocols, etc. The audio generation module 910 can include or access network interface 718 to communicate wirelessly or over a wire with the audio signaling component 950.

[00281] The audio generation module 910 can interface, control, or otherwise manage various types of audio signaling components 950 in order to cause the audio signaling component 950 to generate, block, control, or otherwise provide the audio signal having one or more predetermined parameters. The audio generation module 910 can include a driver configured to drive an audio source of the audio signaling component 950. For example, the audio source can include a speaker, and the audio generation module 910 (or the audio signaling component) can include a transducer that converts electrical energy to sound waves or acoustic waves. The audio generation module 910 can include a computing chip, microchip, circuit, microcontroller, operational amplifiers, transistors, resistors, or diodes configured to provide electricity or power having certain voltage and current characteristics to drive the speaker to generate an audio signal with desired acoustic characteristics.

[00282] In some embodiments, the audio generation module 910 can instruct the audio signaling component 950 to provide an audio signal. For example, the audio signal can include an acoustic wave 1000 as depicted in FIG. 10A. The audio signal can include multiple acoustic waves. The audio signal can generate one or more acoustic waves. The acoustic wave 1000 can include or be formed of a mechanical wave of pressure and displacement that travels through media such as gases, liquids, and solids. The acoustic wave can travel through a medium to cause vibration, sound, ultrasound or infrasound. The acoustic wave can propagate through air, water or solids as longitudinal waves. The acoustic wave can propagate through solids as a transverse wave.

[00283] The acoustic wave can generate sound due to the oscillation in pressure, stress, particle displacement, or particle velocity propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. Sound can refer to the auditory sensation evoked by this oscillation. For example, sound can refer to the reception of acoustic waves and their perception by the brain.

[00284] The audio signaling component 950 or audio source thereof can generate the acoustic waves by vibrating a diaphragm of the audio source. For example, the audio source can include a diaphragm such as a transducer configured to inter-convert mechanical vibrations to sounds. The diaphragm can include a thin membrane or sheet of various materials, suspended at its edges. The varying pressure of sound waves imparts mechanical vibrations to the diaphragm which can then create acoustic waves or sound.

[00285] The acoustic wave 1000 illustrated in FIG. 10A includes a wavelength 1010. The wavelength 1010 can refer to a distance between successive crests 1020 of the wave. The wavelength 1010 can be related to the frequency of the acoustic wave and the speed of the acoustic wave. For example, the wavelength can be determined as the quotient of the speed of the acoustic wave divided by the frequency of the acoustic wave. The speed of the acoustic wave can be the product of the frequency and the wavelength. The frequency of the acoustic wave can be the quotient of the speed of the acoustic wave divided by the wavelength of the acoustic wave. Thus, the frequency and the wavelength of the acoustic wave can be inversely proportional. The speed of sound can vary based on the medium through which the acoustic wave propagates. For example, the speed of sound in air can be 343 meters per second.

[00286] A crest 1020 can refer to the top of the wave or point on the wave with the maximum value. The displacement of the medium is at a maximum at the crest 1020 of the wave. The trough 1015 is the opposite of the crest 1020. The trough 1015 is the minimum or lowest point on the wave corresponding to the minimum amount of displacement.
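The wavelength, frequency, and speed relations stated above can be checked numerically; the sketch below simply evaluates wavelength = speed / frequency and frequency = speed / wavelength using the 343 m/s speed of sound in air mentioned in the text.

```python
# Sketch of the wavelength/frequency/speed relations described above.
SPEED_OF_SOUND_AIR_M_S = 343.0

def wavelength_m(frequency_hz: float, speed: float = SPEED_OF_SOUND_AIR_M_S) -> float:
    """wavelength = speed / frequency"""
    return speed / frequency_hz

def frequency_hz(wavelength: float, speed: float = SPEED_OF_SOUND_AIR_M_S) -> float:
    """frequency = speed / wavelength"""
    return speed / wavelength

print(wavelength_m(10_000))   # a 10 kHz tone in air: ~0.0343 m
print(frequency_hz(0.0343))   # and back again: ~10,000 Hz
```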

[00287] The acoustic wave 1000 can include an amplitude 1005. The amplitude 1005 can refer to a maximum extent of a vibration or oscillation of the acoustic wave 1000 measured from a position of equilibrium. The acoustic wave 1000 can be a longitudinal wave if it oscillates or vibrates in the same direction of travel 1025. In some cases, the acoustic wave 1000 can be a transverse wave that vibrates at right angles to the direction of its propagation.

[00288] The audio generation module 910 can instruct the audio signaling component 950 to generate acoustic waves or sound waves having one or more predetermined amplitude or wavelength. Wavelengths of the acoustic wave that are audible to the human ear range from approximately 17 meters to 17 millimeters (or 20 Hz to 20 kHz). The audio generation module 910 can further specify one or more properties of an acoustic wave within or outside the audible spectrum. For example, the frequency of the acoustic wave can range from 0 to 50 kHz. In some embodiments, the frequency of the acoustic wave can range from 8 to 12 kHz. In some embodiments, the frequency of the acoustic wave can be 10 kHz.

[00289] The NSS 905 can modulate, modify, change or otherwise alter properties of the acoustic wave 1000. For example, the NSS 905 can modulate the amplitude or wavelength of the acoustic wave. As depicted in FIG. 10B and FIG. 10C, the NSS 905 can adjust, manipulate, or otherwise modify the amplitude 1005 of the acoustic wave 1000. For example, the NSS 905 can lower the amplitude 1005 to cause the sound to be quieter, as depicted in FIG. 10B, or increase the amplitude 1005 to cause the sound to be louder, as depicted in FIG. 10C.

[00290] In some cases, the NSS 905 can adjust, manipulate or otherwise modify the wavelength 1010 of the acoustic wave. As depicted in FIG. 10D and FIG. 10E, the NSS 905 can adjust, manipulate, or otherwise modify the wavelength 1010 of the acoustic wave 1000. For example, the NSS 905 can increase the wavelength 1010 to cause the sound to have a lower pitch, as depicted in FIG. 10D, or reduce the wavelength 1010 to cause the sound to have a higher pitch, as depicted in FIG. 10E.

[00291] The NSS 905 can modulate the acoustic wave. Modulating the acoustic wave can include modulating one or more properties of the acoustic wave. Modulating the acoustic wave can include filtering the acoustic wave, such as filtering out unwanted frequencies or attenuating the acoustic wave to lower the amplitude. Modulating the acoustic wave can include adding one or more additional acoustic waves to the original acoustic wave. Modulating the acoustic wave can include combining the acoustic wave such that there is constructive or destructive interference where the resultant, combined acoustic wave corresponds to the modulated acoustic wave.

[00292] The NSS 905 can modulate or change one or more properties of the acoustic wave based on a time interval. The NSS 905 can change the one or more properties of the acoustic wave at the end of the time interval. For example, the NSS 905 can change a property of the acoustic wave every 30 seconds, 1 minute, 2 minutes, 3 minutes, 5 minutes, 7 minutes, 10 minutes, or 15 minutes. The NSS 905 can change a modulation frequency of the acoustic wave, where the modulation frequency refers to the repeated modulations or inverse of the pulse rate interval of the acoustic pulses. The modulation frequency can be a predetermined or desired frequency. The modulation frequency can correspond to a desired stimulation frequency of neural oscillations. The modulation frequency can be set to facilitate or cause brainwave entrainment. The NSS 905 can set the modulation frequency to a frequency in the range of 0.1 Hz to 10,000 Hz. For example, the NSS 905 can set the modulation frequency to 0.1 Hz, 1 Hz, 5 Hz, 10 Hz, 20 Hz, 25 Hz, 30 Hz, 31 Hz, 32 Hz, 33 Hz, 34 Hz, 35 Hz, 36 Hz, 37 Hz, 38 Hz, 39 Hz, 40 Hz, 41 Hz, 42 Hz, 43 Hz, 44 Hz, 45 Hz, 46 Hz, 47 Hz, 48 Hz, 49 Hz, 50 Hz, 60 Hz, 70 Hz, 80 Hz, 90 Hz, 100 Hz, 150 Hz, 200 Hz, 250 Hz, 300 Hz, 400 Hz, 500 Hz, 1000 Hz, 2000 Hz, 3000 Hz, 4,000 Hz, 5000 Hz, 6,000 Hz, 7,000 Hz, 8,000 Hz, 9,000 Hz, or 10,000 Hz.

[00293] The audio generation module 910 can determine to provide audio signals that include bursts of acoustic waves, audio pulses, or modulations to acoustic waves. The audio generation module 910 can instruct or otherwise cause the audio signaling component 950 to generate acoustic bursts or pulses. An acoustic pulse can refer to a burst of acoustic waves or a modulation to a property of an acoustic wave that is perceived by the brain as a change in sound. For example, an audio source that is intermittently turned on and off can create audio bursts or changes in sound. The audio source can be turned on and off based on a predetermined or fixed pulse rate interval, such as every 0.025 seconds, to provide a pulse repetition frequency of 40 Hz. The audio source can be turned on and off to provide a pulse repetition frequency in the range of 0.1 Hz to 10 kHz or more.

[00294] For example, FIGs. 10F-10I illustrate bursts of acoustic waves or bursts of modulations that can be applied to acoustic waves. The bursts of acoustic waves can include, for example, audio tones, beeps, or clicks. The modulations can refer to changes in the amplitude of the acoustic wave, changes in frequency or wavelength of the acoustic wave, overlaying another acoustic wave over the original acoustic wave, or otherwise modifying or changing the acoustic wave.

[00295] For example, FIG. 10F illustrates acoustic bursts 1035a-c (or modulation pulses 1035a-c) in accordance with an embodiment. The acoustic bursts 1035a-c can be illustrated via a graph where the y-axis represents a parameter of the acoustic wave (e.g., frequency, wavelength, or amplitude). The x-axis can represent time (e.g., seconds, milliseconds, or microseconds).

[00296] The audio signal can include a modulated acoustic wave that is modulated between different frequencies, wavelengths, or amplitudes. For example, the NSS 905 can modulate an acoustic wave between a frequency in the audio spectrum, such as Ma, and a frequency outside the audio spectrum, such as Mo. The NSS 905 can modulate the acoustic wave between two or more frequencies, between an on state and an off state, or between a high power state and a low power state.

[00297] The acoustic bursts 1035a-c can have an acoustic wave parameter with value Ma that is different from the value Mo of the acoustic wave parameter. The modulation Ma can refer to a frequency, wavelength, or amplitude. The pulses 1035a-c can be generated with a pulse rate interval (PRI) 1040.

[00298] For example, the acoustic wave parameter can be the frequency of the acoustic wave. The first value Mo can be a low frequency or carrier frequency of the acoustic wave, such as 10 kHz. The second value, Ma, can be different from the first frequency Mo. The second frequency Ma can be lower or higher than the first frequency Mo. For example, the second frequency Ma can be 11 kHz. The difference between the first frequency and the second frequency can be determined or set based on a level of sensitivity of the human ear. The difference between the first frequency and the second frequency can be determined or set based on profile information 945 for the subject. The difference between the first frequency Mo and the second frequency Ma can be determined such that the modulation or change in the acoustic wave facilitate brainwave entrainment.

[00299] In some cases, the parameter of the acoustic wave used to generate the acoustic burst 1035a can be constant at Ma, thereby generating a square wave as illustrated in FIG. 10F. In some embodiments, each of the three pulses 1035a-c can include acoustic waves having a same frequency Ma.

[00300] The width of each of the acoustic bursts or pulses (e.g., the duration of the burst of the acoustic wave with the parameter Ma) can correspond to a pulse width 1030a. The pulse width 1030a can refer to the length or duration of the burst. The pulse width 1030a can be measured in units of time or distance. In some embodiments, the pulses 1035a-c can include acoustic waves having different frequencies from one another. In some embodiments, the pulses 1035a-c can have different pulse widths 1030a from one another, as illustrated in FIG. 10G. For example, a first pulse 1035d of FIG. 10G can have a pulse width 1030a, while a second pulse 1035e has a second pulse width 1030b that is greater than the first pulse width 1030a. A third pulse 1035f can have a third pulse width 1030c that is less than the second pulse width 1030b. The third pulse width 1030c can also be less than the first pulse width 1030a. While the pulse widths 1030a-c of the pulses 1035d-f of the pulse train may vary, the audio generation module 910 can maintain a constant pulse rate interval 1040 for the pulse train.

[00301] The pulses 1035a-c can form a pulse train having a pulse rate interval 1040. The pulse rate interval 1040 can be quantified using units of time. The pulse rate interval 1040 can be based on a frequency of the pulses of the pulse train 201. The frequency of the pulses of the pulse train 201 can be referred to as a modulation frequency. For example, the audio generation module 910 can provide a pulse train 201 with a predetermined frequency, such as 40 Hz. To do so, the audio generation module 910 can determine the pulse rate interval 1040 by taking the multiplicative inverse (or reciprocal) of the frequency (e.g., 1 divided by the predetermined frequency for the pulse train). For example, the audio generation module 910 can take the multiplicative inverse of 40 Hz by dividing 1 by 40 Hz to determine the pulse rate interval 1040 as 0.025 seconds. The pulse rate interval 1040 can remain constant throughout the pulse train. In some embodiments, the pulse rate interval 1040 can vary throughout the pulse train or from one pulse train to a subsequent pulse train. In some embodiments, the number of pulses transmitted during a second can be fixed, while the pulse rate interval 1040 varies.
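The reciprocal relationship between the modulation frequency and the pulse rate interval described above is a one-line computation, shown here purely for illustration.

```python
# Sketch of the pulse rate interval computation: PRI is the reciprocal of the modulation frequency.
def pulse_rate_interval_s(modulation_frequency_hz: float) -> float:
    return 1.0 / modulation_frequency_hz

print(pulse_rate_interval_s(40.0))  # 0.025 s between pulses for a 40 Hz pulse train
```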

[00302] In some embodiments, the audio generation module 910 can generate an audio burst or audio pulse having an acoustic wave that varies in frequency, amplitude, or wavelength. For example, the audio generation module 910 can generate up-chirp pulses where the frequency, amplitude or wavelength of the acoustic wave of the audio pulse increases from the beginning of the pulse to the end of the pulse as illustrated in FIG. 10H. For example, the frequency, amplitude or wavelength of the acoustic wave at the beginning of pulse 1035g can be Ma. The frequency, amplitude or wavelength of the acoustic wave of the pulse 1035g can increase from Ma to Mb in the middle of the pulse 1035g, and then to a maximum of Mc at the end of the pulse 1035g. Thus, the frequency, amplitude or wavelength of the acoustic wave used to generate the pulse 1035g can range from Ma to Mc. The frequency, amplitude or wavelength can increase linearly, exponentially, or based on some other rate or curve. One or more of the frequency, amplitude or wavelength of the acoustic wave can change from the beginning of the pulse to the end of the pulse.

[00303] The audio generation module 910 can generate down-chirp pulses, as illustrated in FIG. 10I, where the frequency, amplitude or wavelength of the acoustic wave of the acoustic pulse decreases from the beginning of the pulse to the end of the pulse. For example, the frequency, amplitude or wavelength of an acoustic wave at the beginning of pulse 1035j can be Mc. The frequency, amplitude or wavelength of the acoustic wave of the pulse 1035j can decrease from Mc to Mb in the middle of the pulse 1035j, and then to a minimum of Ma at the end of the pulse 1035j. Thus, the frequency, amplitude or wavelength of the acoustic wave used to generate the pulse 1035j can range from Mc to Ma. The frequency, amplitude or wavelength can decrease linearly, exponentially, or based on some other rate or curve. One or more of the frequency, amplitude or wavelength of the acoustic wave can change from the beginning of the pulse to the end of the pulse.

[00304] In some embodiments, the audio generation module 910 can instruct or cause the audio signaling component 950 to generate audio pulses to stimulate specific or predetermined portions of the brain or a specific cortex. The frequency, wavelength, modulation frequency, amplitude and other aspects of the audio pulse, tone or music based stimuli can dictate which cortex or cortices are recruited to process the stimuli. The audio signaling component 950 can stimulate discrete portions of the cortex by modulating the presentation of the stimuli to target specific or general regions of interest. The modulation parameters or amplitude of the audio stimuli can dictate which region of the cortex is stimulated. For example, different regions of the cortex are recruited to process different frequencies of sound, called their characteristic frequencies. Further, ear laterality of stimulation can have an effect on cortex response since some subjects can be treated by stimulating one ear as opposed to both ears.

[00305] Audio signaling component 950 can be designed and constructed to generate the audio pulses responsive to instructions from the audio generation module 910. The instructions can include, for example, parameters of the audio pulse such as a frequency or wavelength of the acoustic wave, duration of the pulse, frequency of the pulse train, pulse rate interval, or duration of the pulse train (e.g., a number of pulses in the pulse train or the length of time to transmit a pulse train having a predetermined frequency). The audio pulse can be perceived, observed, or otherwise identified by the brain via cochlear means such as ears. The audio pulses can be transmitted to the ear via an audio source speaker in close proximity to the ear, such as headphones, earbuds, bone conduction transducers, or cochlear implants. The audio pulses can be transmitted to the ear via an audio source or speaker not in close proximity to the ear, such as a surround sound speaker system, bookshelf speakers, or other speaker not directly or indirectly in contact with the ear.

[00306] FIG. 11A illustrates audio signals using binaural beats or binaural pulses, in accordance with an embodiment. In brief summary, binaural beats refers to providing a different tone to each ear of the subject. When the brain perceives the two different tones, the brain mixes the two tones together to create a pulse. The two different tones can be selected such that the sum of the tones creates a pulse train having a desired pulse rate interval 1040.

[00307] The audio signaling component 950 can include a first audio source that provides an audio signal to the first ear of a subject, and a second audio source that provides a second audio signal to the second ear of a subject. The first audio source and the second audio source can be different. The first ear may only perceive the first audio signal from the first audio source, and the second ear may only receive the second audio signal from the second audio source. Audio sources can include, for example, headphones, earbuds, or bone conduction transducers. The audio sources can include stereo audio sources.

[00308] The audio generation component 910 can select a first tone for the first ear and a different second tone for the second ear. A tone can be characterized by its duration, pitch, intensity (or loudness), or timbre (or quality). In some cases, the first tone and the second tone can be different if they have different frequencies. In some cases, the first tone and the second tone can be different if they have different phase offsets. The first tone and the second tone can each be pure tones. A pure tone can be a tone having a sinusoidal waveform with a single frequency.

[00309] As illustrated in FIG. 11A, the first tone or offset wave 1105 is slightly different from the second tone 1110 or carrier wave 1110. The first tone 1105 has a higher frequency than the second tone 1110. The first tone 1105 can be generated by a first earbud that is inserted into one of the subject’s ears, and the second tone 1110 can be generated by a second earbud that is inserted into the other of the subject’s ears. When the auditory cortex of the brain perceives the first tone 1105 and the second tone 1110, the brain can sum the two tones. The brain can sum the acoustic waveforms corresponding to the two tones. The brain can sum the two waveforms as illustrated by waveform sum 1115. Due to the first and second tones having a different parameter (such as a different frequency or phase offset), portions of the waves can add and subtract from one another to result in waveform 1115 having one or more pulses 1130 (or beats 1130). The pulses 1130 can be separated by portions 1125 that are at equilibrium. The pulses 1130 perceived by the brain by mixing these two different waveforms together can induce brainwave entrainment.
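The summation described above can be reproduced numerically: two pure tones whose frequencies differ by the desired beat rate are generated separately, and their sum carries an envelope that pulses at the difference frequency. The sketch below is an illustration only; the 200 Hz and 240 Hz carrier values and the one-second duration are assumptions, chosen so the difference equals 40 Hz.

```python
# Sketch of the binaural-beat construction: summing two tones whose frequencies differ by 40 Hz
# yields a waveform whose envelope beats at 40 Hz (carrier values are assumptions).
import numpy as np

SAMPLE_RATE = 44_100
CARRIER_HZ = 200.0          # tone delivered to one ear
OFFSET_HZ = 240.0           # tone delivered to the other ear; difference = 40 Hz beat

t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
left = np.sin(2 * np.pi * CARRIER_HZ * t)
right = np.sin(2 * np.pi * OFFSET_HZ * t)
perceived = left + right    # the summed waveform beats at |240 - 200| = 40 Hz

# The envelope maxima are spaced at the 40 Hz pulse rate interval (0.025 s).
print(perceived.shape, perceived.max())
```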

[00310] In some embodiments, the NSS 905 can generate binaural beats using a pitch panning technique. For example, the audio generation module 910 or audio adjustment module 915 can include or use a filter to modulate the pitch of a sound file or single tone up and down, and at the same time pan the modulation between stereo sides, such that one side will have a slightly higher pitch while the other side has a pitch that is slightly lower. The stereo sides can refer to the first audio source that generates and provides the audio signal to the first ear of the subject, and the second audio source that generates and provides the audio signal to the second ear of the subject. A sound file can refer to a file format configured to store a representation of, or information about, an acoustic wave. Example sound file formats can include .mp3, .wav, .aac, .m4a, .smf, etc.

[00311] The NSS 905 can use this pitch panning technique to generate a type of spatial positioning that, when listened to through stereo headphones, is perceived by the brain in a manner similar to binaural beats. The NSS 905 can, therefore, use this pitch panning technique to generate pulses or beats using a single tone or a single sound file.

[00312] In some cases, the NSS 905 can generate monaural beats or monaural pulses. Monaural beats or pulses are similar to binaural beats in that they are also generated by combining two tones to form a beat. The NSS 905 or component of system 100 can form monaural beats by combining the two tones using a digital or analog technique before the sound reaches the ears, as opposed to the brain combining the waveforms as in binaural beats. For example, the NSS 905 (or audio generation component 910) can identify and select two different waveforms that, when combined, produce beats or pulses having a desired pulse rate interval. The NSS 905 can identify a first digital representation of a first acoustic waveform, and identify a second digital representation of a second acoustic waveform having a different parameter than the first acoustic waveform. The NSS 905 can combine the first and second digital waveforms to generate a third digital waveform different from the first digital waveform and the second digital waveform. The NSS 905 can then transmit the third digital waveform in a digital form to the audio signaling component 950. The NSS 905 can translate the digital waveform to an analog format and transmit the analog format to the audio signaling component 950. The audio signaling component 950 can then, via an audio source, generate the sound to be perceived by one or both ears. The same sound can be perceived by both ears. The sound can include the pulses or beats spaced at the desired pulse rate interval 1040.

[00313] FIG. 11B illustrates acoustic pulses having isochronic tones, in accordance with an embodiment. Isochronic tones are evenly spaced tone pulses. Isochronic tones can be created without having to combine two different tones. The NSS 905 or other component of system 100 can create the isochronic tone by turning a tone on and off. The NSS 905 can generate the isochronic tones or pulses by instructing the audio signaling component to turn on and off. The NSS 905 can modify a digital representation of an acoustic wave to remove or set digital values of the acoustic wave such that sound is generated during the pulses 1135 and no sound is generated during the null portions 1140.
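The isochronic construction just described amounts to gating a single tone on and off at the stimulation frequency. The sketch below is illustrative only; the 10 kHz carrier, the 40 Hz gate rate, and the 50% duty cycle are assumptions drawn from the example values elsewhere in the text.

```python
# Sketch of isochronic tone generation: a single tone gated on and off at the stimulation
# frequency (40 Hz assumed), producing evenly spaced pulses without combining two tones.
import numpy as np

SAMPLE_RATE = 44_100
TONE_HZ = 10_000.0     # carrier tone
PULSE_RATE_HZ = 40.0   # stimulation frequency
DUTY_CYCLE = 0.5       # assumed fraction of each cycle during which the tone is on

t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
tone = np.sin(2 * np.pi * TONE_HZ * t)
gate = ((t * PULSE_RATE_HZ) % 1.0) < DUTY_CYCLE   # square on/off gate at 40 Hz
isochronic = tone * gate

print(isochronic.shape)
```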

[00314] By turning on and off the acoustic wave, the NSS 905 can establish acoustic pulses 1135 that are spaced apart by a pulse rate interval 1040 that corresponds to a desired stimulation frequency, such as 40 Hz. The isochronic pulses spaced apart at the desired PRI 1040 can induce brainwave entrainment.

[00315] FIG. 11C illustrates audio pulses generated by the NSS 905 using a sound track, in accordance with an embodiment. A sound track can include or refer to a complex acoustical wave that includes multiple different frequencies, amplitudes, or tones. For example, a sound track can include a voice track, a musical instrument track, a musical track having both voice and musical instruments, nature sounds, or white noise.

[00316] The NSS 905 can modulate the sound track to induce brainwave entrainment by rhythmically adjusting a component in the sound. For example, the NSS 905 can modulate the volume by increasing and decreasing the amplitude of the acoustic wave or sound track to create the rhythmic stimulus corresponding to the stimulation frequency for inducing brainwave entrainment. Thus, the NSS 905 can embed, into a sound track, acoustic pulses having a pulse rate interval corresponding to the desired stimulation frequency to induce brainwave entrainment. The NSS 905 can manipulate the sound track to generate a new, modified sound track having acoustic pulses with a pulse rate interval corresponding to the desired stimulation frequency to induce brainwave entrainment.

[00317] As illustrated in FIG. 11C, pulses 1135 are generated by modulating the volume from a first level Va to a second level Vb. During portions 1140 of the acoustic wave 345, the NSS 905 can set or keep the volume at Va. The volume Va can refer to an amplitude of the wave, or a maximum amplitude or crest of the wave 345 during the portion 1140. The NSS 905 can then adjust, change, or increase the volume to Vb during portion 1135. The NSS 905 can increase the volume by a predetermined amount, such as a percentage, a number of decibels, a subject-specified amount, or other amount. The NSS 905 can set or maintain the volume at Vb for a duration corresponding to a desired pulse length for the pulse 1135.

[00318] In some embodiments, the NSS 905 can include an attenuator to attenuate the volume from level Vb to level Va. In some embodiments, the NSS 905 can instruct an attenuator (e.g., an attenuator of audio signaling component 950) to attenuate the volume from level Vb to level Va. In some embodiments, the NSS 905 can include an amplifier to amplify or increase the volume from Va to Vb. In some embodiments, the NSS 905 can instruct an amplifier (e.g., an amplifier of the audio signaling component 950) to amplify or increase the volume from Va to Vb.
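The volume modulation between Va and Vb can be pictured as multiplying an arbitrary sound track by a two-level gain that switches at the stimulation frequency. The sketch below is an illustration only; the random "track", the Va/Vb values, and the 50% split of each cycle are assumptions.

```python
# Sketch of embedding 40 Hz pulses into an arbitrary sound track by modulating its volume
# between two levels Va and Vb (values and the noise "track" are assumptions for illustration).
import numpy as np

SAMPLE_RATE = 44_100
PULSE_RATE_HZ = 40.0
VA, VB = 0.4, 1.0      # baseline and boosted volume levels

t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
track = np.random.default_rng(0).uniform(-1.0, 1.0, t.size)   # stand-in for a music/voice track
gate = ((t * PULSE_RATE_HZ) % 1.0) < 0.5                       # on/off portions of each 25 ms cycle
volume = np.where(gate, VB, VA)
modulated_track = track * volume

print(modulated_track.shape)
```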

[00319] Referring back to FIG. 9, the NSS 905 can include, access, interface with, or otherwise communicate with at least one audio adjustment module 915. The audio adjustment module 915 can be designed and constructed to adjust a parameter associated with the audio signal, such as a frequency, amplitude, wavelength, pattern or other parameter of the audio signal. The audio adjustment module 915 can automatically vary a parameter of the audio signal based on profile information or feedback. The audio adjustment module 915 can receive the feedback information from the feedback monitor 935. The audio adjustment module 915 can receive instructions or information from a side effects management module 930. The audio adjustment module 915 can receive profile information from profile manager 925.

[00320] The NSS 905 can include, access, interface with, or otherwise communicate with at least one unwanted frequency filtering module 920. The unwanted frequency filtering module 920 can be designed and constructed to block, mitigate, reduce, or otherwise filter out frequencies of audio signals that are undesired to prevent or reduce an amount of such audio signals from being perceived by the brain. The unwanted frequency filtering module 920 can interface, instruct, control, or otherwise communicate with a filtering component 955 to cause the filtering component 955 to block, attenuate, or otherwise reduce the effect of the unwanted frequency on the neural oscillations.

[00321] The unwanted frequency filtering module 920 can include an active noise control component (e.g., active noise cancellation component 1215 depicted in FIG. 12B). Active noise control can be referred to or include active noise cancellation or active noise reduction. Active noise control can reduce an unwanted sound by adding a second sound having a parameter specifically selected to cancel or attenuate the first sound. In some cases, the active noise control component can emit a sound wave with the same amplitude but with an inverted phase (or antiphase) to the original unwanted sound. The two waves can combine to form a new wave, and effectively cancel each other out by destructive interference.

[00322] The active noise control component can include analog circuits or digital signal processing. The active noise control component can include adaptive techniques to analyze waveforms of the background aural or nonaural noise. Responsive to the background noise, the active noise control component can generate an audio signal that can either phase shift or invert the polarity of the original signal. This inverted signal can be amplified by a transducer or speaker to create a sound wave directly proportional to the amplitude of the original waveform, creating destructive interference. This can reduce the volume of the perceivable noise.
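A minimal sketch of the phase-inversion idea follows. It assumes the unwanted noise has already been isolated as a discrete-time signal, and it ignores the adaptive filtering, propagation delay, and transducer response a practical active noise control component would need; the function name and the 120 Hz example are assumptions.

    import numpy as np

    def antiphase(noise_estimate):
        # Return a cancellation signal with the same amplitude but inverted
        # polarity (180-degree phase shift) relative to the estimated noise.
        return -np.asarray(noise_estimate)

    # Superposing the noise and its antiphase copy cancels by destructive interference.
    fs = 8000
    t = np.arange(fs) / fs
    unwanted = 0.3 * np.sin(2 * np.pi * 120 * t)   # assumed unwanted component
    residual = unwanted + antiphase(unwanted)      # approximately zero everywhere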

[00323] In some embodiments, a noise-cancellation speaker can be co-located with a sound source speaker. In some embodiments, a noise cancellation speaker can be co-located with a sound source that is to be attenuated.

[00324] The unwanted frequency filtering module 920 can filter out unwanted frequencies that can adversely impact auditory brainwave entrainment. For example, an active noise control component can identify that audio signals include acoustic bursts having the desired pulse rate interval, as well as acoustic bursts having an unwanted pulse rate interval. The active noise control component can identify the waveforms corresponding to the acoustic bursts having the unwanted pulse rate interval, and generate an inverted phase waveform to cancel out or attenuate the unwanted acoustic bursts.

[00325] The NSS 905 can include, access, interface with, or otherwise communicate with at least one profile manager 925. The profile manager 925 can be designed or constructed to store, update, retrieve or otherwise manage information associated with one or more subjects associated with the auditory brain entrainment. Profile information can include, for example, historical treatment information, historical brain entrainment information, dosing information, parameters of acoustic waves, feedback, physiological information, environmental information, or other data associated with the systems and methods of brain entrainment.

[00326] The NSS 905 can include, access, interface with, or otherwise communicate with at least one side effects management module 930. The side effects management module 930 can be designed and constructed to provide information to the audio adjustment module 915 or the audio generation module 910 to change one or more parameter of the audio signal in order to reduce a side effect. Side effects can include, for example, nausea, migraines, fatigue, seizures, ear strain, deafness, ringing, or tinnitus.

[00327] The side effects management module 930 can automatically instruct a component of the NSS 905 to alter or change a parameter of the audio signal. The side effects management module 930 can be configured with predetermined thresholds to reduce side effects. For example, the side effects management module 930 can be configured with a maximum duration of a pulse train, maximum amplitude of acoustic waves, maximum volume, maximum duty cycle of a pulse train (e.g., the pulse width multiplied by the frequency of the pulse train), or maximum number of treatments for brainwave entrainment in a time period (e.g., 1 hour, 2 hours, 12 hours, or 24 hours).

[00328] The side effects management module 930 can cause a change in the parameter of the audio signal in response to feedback information. The side effects management module 930 can receive feedback from the feedback monitor 935. The side effects management module 930 can determine to adjust a parameter of the audio signal based on the feedback. The side effects management module 930 can compare the feedback with a threshold to determine to adjust the parameter of the audio signal.

[00329] The side effects management module 930 can be configured with or include a policy engine that applies a policy or a rule to the current audio signal and feedback to determine an adjustment to the audio signal. For example, if feedback indicates that a patient receiving audio signals has a heart rate or pulse rate above a threshold, the side effects management module 930 can turn off the pulse train until the pulse rate stabilizes to a value below the threshold, or below a second threshold that is lower than the threshold.
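One way to picture the thresholds and the policy engine described in paragraphs [00327]-[00329] is as a set of rules evaluated against the feedback stream. The sketch below is purely illustrative; the threshold values, dictionary field names, and the pause-and-resume rule are assumptions, not limits or logic taken from this disclosure.

    # Illustrative side-effect policy check (all values and field names are assumed).
    MAX_DUTY_CYCLE = 0.5      # pulse width multiplied by pulse-train frequency
    MAX_VOLUME_DB = 85.0
    HEART_RATE_PAUSE = 120    # pause stimulation above this rate (bpm)
    HEART_RATE_RESUME = 100   # resume only once below this second, lower threshold

    def apply_policy(state, feedback):
        # Adjust the audio-signal state in place based on feedback readings.
        duty_cycle = state["pulse_width_s"] * state["pulse_rate_hz"]
        if duty_cycle > MAX_DUTY_CYCLE:
            state["pulse_width_s"] = MAX_DUTY_CYCLE / state["pulse_rate_hz"]
        if state["volume_db"] > MAX_VOLUME_DB:
            state["volume_db"] = MAX_VOLUME_DB
        if feedback["heart_rate_bpm"] > HEART_RATE_PAUSE:
            state["pulse_train_on"] = False
        elif not state["pulse_train_on"] and feedback["heart_rate_bpm"] < HEART_RATE_RESUME:
            state["pulse_train_on"] = True
        return state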

[00330] The NSS 905 can include, access, interface with, or otherwise communicate with at least one feedback monitor 935. The feedback monitor can be designed and constructed to receive feedback information from a feedback component 960. Feedback component 960 can include, for example, a feedback sensor 1405 such as a temperature sensor, heart or pulse rate monitor, physiological sensor, ambient noise sensor, microphone, ambient temperature sensor, blood pressure monitor, brain wave sensor, EEG probe, electrooculography (“EOG”) probes configured to measure the corneo-retinal standing potential that exists between the front and the back of the human eye, accelerometer, gyroscope, motion detector, proximity sensor, camera, or photo detector.

Systems and Devices Configured for Neural Stimulation via Auditory Stimulation

[00331] FIG. 12A illustrates a system for auditory brain entrainment in accordance with an embodiment. The system 1200 can include one or more speakers 1205. The system 1200 can include one or more microphones 1210. In some embodiments, the system can include both speakers 1205 and microphones 1210. In some embodiments, the system 1200 includes speakers 1205 and may not include microphones 1210. In some embodiments, the system 1200 includes microphones 1210 and may not include speakers 1205.

[00332] The speakers 1205 can be integrated with the audio signaling component 950. The audio signaling component 950 can include speakers 1205. The speakers 1205 can interact or communicate with audio signaling component 950. For example, the audio signaling component 950 can instruct the speaker 1205 to generate sound.

[00333] The microphones 1210 can be integrated with the feedback component 960. The feedback component 960 can include microphones 1210. The microphones 1210 can interact or communicate with feedback component 960. For example, the feedback component 960 can receive information, data or signals from microphone 1210.

[00334] In some embodiments, the speaker 1205 and the microphone 1210 can be integrated together or be the same device. For example, the speaker 1205 can be configured to function as the microphone 1210. The NSS 905 can toggle the speaker 1205 from a speaker mode to a microphone mode.

[00335] In some embodiments, the system 1200 can include a single speaker 1205 positioned at one of the ears of the subject. In some embodiments, the system 1200 can include two speakers. A first speaker of the two speakers can be positioned at a first ear, and the second speaker of the two speakers can be positioned at the second ear. In some embodiments, additional speakers can be positioned in front of the subject’s head, or behind the subject’s head. In some embodiments, one or more microphones 1210 can be positioned at one or both ears, in front of the subject’s head, or behind the subject’s head.

[00336] The speaker 1205 can include a dynamic cone speaker configured to produce sound from an electrical signal. The speaker 1205 can include a full-range driver to produce acoustic waves with frequencies over some or all of the audible range (e.g., 60 Hz to 20,000 Hz). The speaker 1205 can include a driver to produce acoustic waves with frequencies outside the audible range, such as 0 to 60 Hz, or in the ultrasonic range such as 20 kHz to 4 GHz. The speaker 1205 can include one or more transducers or drivers to produce sounds at varying portions of the audible frequency range. For example, the speaker 1205 can include tweeters for high range frequencies (e.g., 2,000 Hz to 20,000 Hz), mid-range drivers for middle frequencies (e.g., 250 Hz to 2,000 Hz), or woofers for low frequencies (e.g., 60 Hz to 250 Hz).

[00337] The speaker 1205 can include one or more types of speaker hardware, components or technology to produce sound. For example, the speaker 1205 can include a diaphragm to produce sound. The speaker 1205 can include a moving-iron loudspeaker that uses a stationary coil to vibrate a magnetized piece of metal. The speaker 1205 can include a piezoelectric speaker. A piezoelectric speaker can use the piezoelectric effect to generate sound by applying a voltage to a piezoelectric material to generate motion, which is converted into audible sound using diaphragms and resonators.

[00338] The speaker 1205 can include various other types of hardware or technology, such as magnetostatic loudspeakers, magnetostrictive speakers, electrostatic loudspeakers, ribbon speakers, planar magnetic loudspeakers, bending wave loudspeakers, coaxial drivers, horn loudspeakers, Heil air motion transducers, or transparent ionic conduction speakers.

[00339] In some cases, the speaker 1205 may not include a diaphragm. For example, the speaker 1205 can be a plasma arc speaker that uses electrical plasma as a radiating element. The speaker 1205 can be a thermoacoustic speaker that uses a carbon nanotube thin film. The speaker 1205 can be a rotary woofer that includes a fan with blades that constantly change their pitch.

[00340] In some embodiments, the speaker 1205 can include a headphone or a pair of headphones, earspeakers, earphones, or earbuds. Headphones can be relatively small speakers as compared to loudspeakers. Headphones can be designed and constructed to be placed in the ear, around the ear, or otherwise at or near the ear. Headphones can include electroacoustic transducers that convert an electrical signal to a corresponding sound in the subject’s ear. In some embodiments, the headphones 1205 can include or interface with a headphone amplifier, such as an integrated amplifier or a standalone unit.

[00341] In some embodiments, the speaker 1205 can include headphones that can include an air jet that pushes air into the auditory canal, pushing the tympanum in a manner similar to that of a sound wave. The compression and rarefaction of the tympanic membrane through bursts of air (with or without any discernible sound) can control frequencies of neural oscillations similar to auditory signals. For example, the speaker 1205 can include air jets or a device that resembles in-ear headphones that either push, pull or both push and pull air into and out of the ear canal in order to compress or pull the tympanic membrane to affect the frequencies of neural oscillations. The NSS 905 can instruct, configure or cause the air jets to generate bursts of air at a predetermined frequency.

[00342] In some embodiments, the headphones can connect to the audio signaling component 950 via a wired or wireless connection. In some embodiments, the audio signaling component 950 can include the headphones. In some embodiments, the headphones 1205 can interface with one or more components of the NSS 905 via a wired or wireless connection. In some embodiments, the headphones 1205 can include one or more components of the NSS 905 or system 100, such as the audio generation module 910, audio adjustment module 915, unwanted frequency filtering module 920, profile manager 925, side effects management module 930, feedback monitor 935, audio signaling component 950, filtering component 955, or feedback component 960.

[00343] The speaker 1205 can include or be integrated into various types of headphones. For example, the headphones can include circumaural headphones (e.g., full size headphones) that include circular or ellipsoid earpads that are designed and constructed to seal against the head to attenuate external noise. Circumaural headphones can facilitate providing an immersive auditory brainwave stimulation experience, while reducing external distractions. In some embodiments, headphones can include supra-aural headphones, which include pads that press against the ears rather than around them. Supra-aural headphones may provide less attenuation of external noise.

[00344] Both circumaural headphones and supra-aural headphones can have an open back, closed back, or semi open back. An open back leaks more sound and allows more ambient sounds to enter, but provides a more natural or speaker-like sound. Closed back headphones block more of the ambient noise as compared to open back headphones, thus providing a more immersive auditory brainwave stimulation experience while reducing external distractions.

[00345] In some embodiments, headphones can include ear-fitting headphones, such as earphones or in-ear headphones. Earphones (or earbuds) can refer to small headphones that are fitted directly in the outer ear, facing but not inserted in the ear canal. Earphones, however, provide minimal acoustic isolation and allow ambient noise to enter. In-ear headphones (or in-ear monitors or canalphones) can refer to small headphones that can be designed and constructed for insertion into the ear canal. In-ear headphones engage the ear canal and can block out more ambient noise as compared to earphones, thus providing a more immersive auditory brainwave stimulation experience. In-ear headphones can include ear canal plugs made or formed from one or more material, such as silicone rubber, elastomer, or foam. In some embodiments, in-ear headphones can include custom-made castings of the ear canal to create custom-molded plugs that provide added comfort and noise isolation to the subject, thereby further improving the immersiveness of the auditory brainwave stimulation experience.

[00346] In some embodiments, one or more microphones 1210 can be used to detect sound. A microphone 1210 can be integrated with a speaker 1205. The microphone 1210 can provide feedback information to the NSS 905 or other component of system 100. The microphone 1210 can provide feedback to a component of the speaker 1205 to cause the speaker 1205 to adjust a parameter of the audio signal.

[00347] The microphone 1210 can include a transducer that converts sound into an electrical signal. The microphone 1210 can use electromagnetic induction, capacitance change, or piezoelectricity to produce the electrical signal from air pressure variations. In some cases, the microphone 1210 can include or be connected to a pre-amplifier to amplify the signal before it is recorded or processed. The microphone 1210 can include one or more types of microphones, including, for example, a condenser microphone, RF condenser microphone, electret condenser microphone, dynamic microphone, moving-coil microphone, ribbon microphone, carbon microphone, piezoelectric microphone, crystal microphone, fiber optic microphone, laser microphone, liquid or water microphone, microelectromechanical systems (“MEMS”) microphone, or a speaker used as a microphone.

[00348] The feedback component 960 can include or interface with the microphone 1210 to obtain, identify, or receive sound. The feedback component 960 can obtain ambient noise. The feedback component 960 can obtain sound from the speakers 1205 to facilitate the NSS 905 adjusting a characteristic of the audio signal generated by the speaker 1205. The microphone 1210 can receive voice input from the subject, such as audio commands, instructions, requests, feedback information, or responses to survey questions.

[00349] In some embodiments, one or more speakers 1205 can be integrated with one or more microphones 1210. For example, the speaker 1205 and microphone 1210 can form a headset, be placed in a single enclosure, or may even be the same device since the speaker 1205 and the microphone 1210 may be structurally designed to toggle between a sound generation mode and a sound reception mode.

[00350] FIG. 12B illustrates a system configuration for auditory brain entrainment in accordance with an embodiment. The system 1200 can include at least one speaker 1205. The system 1200 can include at least one microphone 1210. The system 1200 can include at least one active noise cancellation component 1215. The system 1200 can include at least one feedback sensor 1225. The system 1200 can include or interface with the NSS 905. The system 1200 can include or interface with an audio player 1220.

[00351] The system 1200 can include a first speaker 1205 positioned at a first ear. The system 1200 can include a second speaker 1205 positioned at a second ear. The system 1200 can include a first active noise cancellation component 1215 communicatively coupled with the first microphone 1210. The system 1200 can include a second active noise cancellation component 1215 communicatively coupled with the second microphone 1210. In some cases, the active noise cancellation component 1215 can communicate with both the first speaker 1205 and the second speaker 1205, or both the first microphone 1210 and the second microphone 1210. The system 1200 can include a first microphone 1210 communicatively coupled with the active noise cancellation component 1215. The system 1200 can include a second microphone 1210 communicatively coupled with the active noise cancellation component 1215. In some embodiments, each of the microphone 1210, speaker 1205, and active noise cancellation component 1215 can communicate or interface with the NSS 905. In some embodiments, the system 1200 can include a first feedback sensor 1225 and a second feedback sensor 1225 communicatively coupled to the NSS 905, the speaker 1205, microphone 1210, or active noise cancellation component 1215.

[00352] In operation, and in some embodiments, the audio player 1220 can play a musical track. The audio player 1220 can provide the audio signal corresponding to the musical track via a wired or wireless connection to the first and second speakers 1205. In some embodiments, the NSS 905 can intercept the audio signal from the audio player. For example, the NSS 905 can receive the digital or analog audio signal from the audio player 1220. The NSS 905 can be intermediary to the audio player 1220 and a speaker 1205. The NSS 905 can analyze the audio signal corresponding to the music in order to embed an auditory brainwave stimulation signal. For example, the NSS 905 can adjust the volume of the auditory signal from the audio player 1220 to generate acoustic pulses having a pulse rate interval as depicted in FIG. 11C. In some embodiments, the NSS 905 can use a binaural beats technique to provide different auditory signals to the first and second speakers that, when perceived by the brain, are combined to have the desired stimulation frequency.
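The binaural beats technique mentioned in paragraph [00352] can be sketched by generating two pure tones whose frequencies differ by the desired stimulation frequency. The 440 Hz carrier, the 40 Hz difference, and the function name below are assumptions used only for illustration.

    import numpy as np

    def binaural_pair(duration_s, carrier_hz=440.0, beat_hz=40.0, fs=44100):
        # Return (left, right) tones whose frequency difference equals beat_hz,
        # so the perceived beat corresponds to the desired stimulation frequency.
        t = np.arange(int(duration_s * fs)) / fs
        left = np.sin(2 * np.pi * carrier_hz * t)
        right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
        return left, right

    left, right = binaural_pair(10.0)   # ten seconds of an assumed 40 Hz binaural beat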

[00353] In some embodiments, the NSS 905 can adjust for any latency between the first and second speakers 1205 such that the brain perceives the audio signals at the same or substantially the same time (e.g., within 1 millisecond, 2 milliseconds, 5 milliseconds, or 10 milliseconds). The NSS 905 can buffer the audio signals to account for latency such that the audio signals are transmitted from the speakers at the same time.
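As a rough sketch of the latency compensation described above, the channel with the shorter path can be delayed so both channels reach the ears at substantially the same time. The per-channel latency figures and the function name are assumptions for illustration.

    import numpy as np

    def align_channels(left, right, left_latency_ms, right_latency_ms, fs=44100):
        # Delay the channel with the shorter path so both arrive together.
        diff_ms = left_latency_ms - right_latency_ms
        pad = int(round(abs(diff_ms) / 1000.0 * fs))
        if diff_ms > 0:    # left path is slower, so delay the right channel
            right = np.concatenate([np.zeros(pad), right])[:len(right)]
        elif diff_ms < 0:  # right path is slower, so delay the left channel
            left = np.concatenate([np.zeros(pad), left])[:len(left)]
        return left, right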

[00354] In some embodiments, the NSS 905 may not be intermediary to the audio player 1220 and the speaker. For example, the NSS 905 can receive the musical track from a digital music repository. The NSS 905 can manipulate or modify the musical track to embed acoustic pulses in accordance with the desired PRI. The NSS 905 can then provide the modified musical track to the audio player 1220 to provide the modified audio signal to the speaker 1205.

[00355] In some embodiments, an active noise cancellation component 1215 can receive ambient noise information from the microphone 1210, identify unwanted frequencies or noise, and generate an inverted phase waveform to cancel out or attenuate the unwanted waveforms. In some embodiments, the system 1200 can include an additional speaker that generates the noise canceling waveform provided by the noise cancellation component 1215. The noise cancellation component 1215 can include the additional speaker.

[00356] The feedback sensor 1225 of the system 1200 can detect feedback information, such as environmental parameters or physiological conditions. The feedback sensor 1225 can provide the feedback information to NSS 905. The NSS 905 can adjust or change the audio signal based on the feedback information. For example, the NSS 905 can determine that a pulse rate of the subject exceeds a predetermined threshold, and then lower the volume of the audio signal. The NSS 905 can detect that the volume of the auditory signal exceeds a threshold, and decrease the amplitude. The NSS 905 can determine that the pulse rate interval is below a threshold, which can indicate that a subject is losing focus or not paying a satisfactory level of attention to the audio signal, and the NSS 905 can increase the amplitude of the audio signal or change the tone or music track. In some embodiments, the NSS 905 can vary the tone or the music track based on a time interval. Varying the tone or the music track can cause the subject to pay a greater level of attention to the auditory stimulation, which can facilitate brainwave entrainment.

[00357] In some embodiments, the NSS 905 can receive neural oscillation information from EEG probes 1225, and adjust the auditory stimulation based on the EEG information. For example, the NSS 905 can determine, from the probe information, that neurons are oscillating at an undesired frequency. The NSS 905 can then identify the corresponding undesired frequency in ambient noise using the microphone 1210. The NSS 905 can then instruct the active noise cancellation component 1215 to cancel out the waveforms corresponding to the ambient noise having the undesired frequency.

[00358] In some embodiments, the NSS 905 can enable a passive noise filter. A passive noise filter can include a circuit having one or more of a resistor, a capacitor, or an inductor that filters out undesired frequencies of noise. In some cases, a passive filter can include a sound insulating material, sound proofing material, or sound absorbing material.

[00359] FIG. 4C illustrates a system configuration for auditory brain entrainment in accordance with an embodiment. The system 401 can provide auditory brainwave stimulation using ambient noise source 1230. For example, system 401 can include the microphone 1210 that detects the ambient noise 1230. The microphone 1210 can provide the detected ambient noise to NSS 905. The NSS 905 can modify the ambient noise 1230 before providing it to the first speaker 1205 or the second speaker 1205. In some embodiments, the system 401 can be integrated or interface with a hearing aid device. A hearing aid can be a device designed to improve hearing.

[00360] The NSS 905 can increase or decrease the amplitude of the ambient noise 1230 to generate acoustic bursts having the desired pulse rate interval. The NSS 905 can provide the modified audio signals to the first and second speakers 1205 to facilitate auditory brainwave entrainment.

[00361] In some embodiments, the NSS 905 can overlay a click train, tones, or other acoustic pulses over the ambient noise 1230. For example, the NSS 905 can receive the ambient noise information from the microphone 1210, apply an auditory stimulation signal to the ambient noise information, and then present the combined ambient noise information and auditory stimulation signal to the first and second speakers 1205. In some cases, the NSS 905 can filter out unwanted frequencies in the ambient noise 1230 prior to providing the auditory stimulation signal to the speakers 1205.
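A simplified sketch of overlaying a click train on captured ambient noise follows. The click amplitude, click length, and the 40 Hz rate are illustrative assumptions, and a practical system would also apply the unwanted-frequency filtering described above before playback.

    import numpy as np

    def overlay_click_train(ambient, fs=44100, rate_hz=40.0,
                            click_len_s=0.002, click_gain=0.5):
        # Add short clicks at the desired pulse rate on top of the ambient signal.
        out = np.array(ambient, dtype=float)
        period = int(fs / rate_hz)
        click_len = int(click_len_s * fs)
        for start in range(0, len(out), period):
            out[start:start + click_len] += click_gain
        return np.clip(out, -1.0, 1.0)   # keep the combined signal in a nominal range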

[00362] Thus, using the ambient noise 1230 as part of the auditory stimulation, a subject can observe the surroundings or carry on with their daily activities while receiving auditory stimulation to facilitate brainwave entrainment.

[00363] FIG. 13 illustrates a system configuration for auditory brain entrainment in accordance with an embodiment. The system 1300 can provide auditory stimulation for brainwave entrainment using a room environment. The system 1300 can include one or more speakers. The system 1300 can include a surround sound system. For example, the system 1300 includes a left speaker 1310, right speaker 1315, center speaker 1305, right surround speaker 1325, and left surround speaker 1330. The system 1300 can include a sub-woofer 1320. The system 1300 can include the microphone 1210. The system 1300 can include or refer to a 5.1 surround system. In some embodiments, the system 1300 can have 1, 2, 3, 4, 5, 6, 7 or more speakers.

[00364] When providing auditory stimulation using a surround system, the NSS 905 can provide the same or different audio signals to each of the speakers in the system 1300. The NSS 905 can modify or adjust audio signals provided to one or more of the speakers in system 1300 in order to facilitate brainwave entrainment. For example, the NSS 905 can receive feedback from microphone 1210 and modify, manipulate or otherwise adjust the audio signal to optimize the auditory stimulation provided to a subject located at a position in the room that corresponds to the location of the microphone 1210. The NSS 905 can optimize or improve the auditory stimulation perceived at the location corresponding to microphone 1210 by analyzing the acoustic beams or waves generated by the speakers that propagate towards the microphone 1210.

[00365] The NSS 905 can be configured with information about the design and construction of each speaker. For example, speaker 1305 can generate sound in a direction that has an angle of 1335; speaker 1310 can generate sound that travels in a direction having an angle of 1340; speaker 1315 can generate sound that travels in a direction having an angle of 1345; speaker 1325 can generate sound that travels in a direction having an angle of 1355; and speaker 1330 can generate sound that travels in a direction having an angle of 1350. These angles can be the optimal or predetermined angles for each of the speakers. These angles can refer to the optimal angle of each speaker such that a person positioned at a location corresponding to microphone 1210 can receive the optimum auditory stimulation. Thus, the speakers in system 1300 can be oriented to transmit auditory stimulation towards the subject.

[00366] In some embodiments, the NSS 905 can enable or disable one or more speakers. In some embodiments, the NSS 905 can increase or decrease the volume of the speakers to facilitate brainwave entrainment. The NSS 905 can intercept musical tracks, television audio, movie audio, internet audio, audio output from a set top box, or other audio source. The NSS 905 can adjust or manipulate the received audio, and transmit the adjusted audio signals to the speakers in system 1300 to induce brainwave entrainment.

[00367] FIG. 14 illustrates feedback sensors 1405 placed or positioned at, on, or near a person’s head. Feedback sensors 1405 can include, for example, EEG probes that detect brain wave activity.

[00368] The feedback monitor 935 can detect, receive, obtain, or otherwise identify feedback information from the one or more feedback sensors 1405. The feedback monitor 935 can provide the feedback information to one or more component of the NSS 905 for further processing or storage. For example, the profile manager 925 can update profile data structure 945 stored in data repository 940 with the feedback information. Profile manager 925 can associate the feedback information with an identifier of the patient or person undergoing the auditory brain stimulation, as well as a time stamp and date stamp corresponding to receipt or detection of the feedback information.

[00369] The feedback monitor 935 can determine a level of attention. The level of attention can refer to the focus provided to the acoustic pulses used for brain stimulation. The feedback monitor 935 can determine the level of attention using various hardware and software techniques. The feedback monitor 935 can assign a score to the level of attention (e.g., 1 to 10 with 1 being low attention and 10 being high attention, or vice versa; 1 to 100 with 1 being low attention and 100 being high attention, or vice versa; or 0 to 1 with 0 being low attention and 1 being high attention, or vice versa), categorize the level of attention (e.g., low, medium, high), grade the attention (e.g., A, B, C, D, or F), or otherwise provide an indication of a level of attention.
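The scoring described above can be pictured as mapping a raw attention measure onto the example scales. The normalization and cut-points in the sketch below are assumptions chosen only to illustrate the idea of scoring, categorizing, and grading.

    def attention_scores(raw, raw_min=0.0, raw_max=1.0):
        # Map a raw attention measure onto the example scales described above.
        norm = min(max((raw - raw_min) / (raw_max - raw_min), 0.0), 1.0)
        score_10 = 1 + round(norm * 9)           # 1 (low attention) to 10 (high attention)
        score_100 = 1 + round(norm * 99)         # 1 (low) to 100 (high)
        category = "low" if norm < 0.34 else "medium" if norm < 0.67 else "high"
        grade = "FDCBA"[min(int(norm * 5), 4)]   # F (low) through A (high)
        return {"0-1": norm, "1-10": score_10, "1-100": score_100,
                "category": category, "grade": grade}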

[00370] In some cases, the feedback monitor 935 can track a person’s eye movement to identify a level of attention. The feedback monitor 935 can interface with a feedback component 960 that includes an eye-tracker. The feedback monitor 935 (e.g., via feedback component 960) can detect and record eye movement of the person and analyze the recorded eye movement to determine an attention span or level of attention. The feedback monitor 935 can measure eye gaze, which can indicate or provide information related to covert attention. For example, the feedback monitor 935 (e.g., via feedback component 960) can be configured with electro-oculography (“EOG”) to measure the skin electric potential around the eye, which can indicate a direction the eye faces relative to the head. In some embodiments, the EOG can include a system or device to stabilize the head so it cannot move in order to determine the direction of the eye relative to the head. In some embodiments, the EOG can include or interface with a head tracker system to determine the position of the head, and then determine the direction of the eye relative to the head.

[00371] In some embodiments, the feedback monitor 935 and feedback component 960 can determine a level of attention the subject is paying to the auditory stimulation based on eye movement. For example, increased eye movement may indicate that the subject is focusing on visual stimuli, as opposed to the auditory stimulation. To determine the level of attention the subject is paying to visual stimuli as opposed to the auditory stimulation, the feedback monitor 935 and feedback component 960 can determine or track the direction of the eye or eye movement using video detection of the pupil or corneal reflection. For example, the feedback component 960 can include one or more cameras or video cameras. The feedback component 960 can include an infra-red source that sends light pulses towards the eyes. The light can be reflected by the eye. The feedback component 960 can detect the position of the reflection. The feedback component 960 can capture or record the position of the reflection. The feedback component 960 can perform image processing on the reflection to determine or compute the direction of the eye or gaze direction of the eye.

[00372] The feedback monitor 935 can compare the eye direction or movement to historical eye direction or movement of the same person, nominal eye movement, or other historical eye movement information to determine a level of attention. For example, the feedback monitor 935 can determine a historical amount of eye movement during historical auditory stimulation sessions. The feedback monitor 935 can compare the current eye movement with the historical eye movement to identify a deviation. The NSS 905 can determine, based on the comparison, an increase in eye movement and further determine that the subject is paying less attention to the current auditory stimulation based on the increase in eye movement. In response to detecting the decrease in attention, the feedback monitor 935 can instruct the audio adjustment module 915 to change a parameter of the audio signal to capture the subject’s attention. The audio adjustment module 915 can change the volume, tone, pitch, or music track to capture the subject’s attention or increase the level of attention the subject is paying to the auditory stimulation. Upon changing the audio signal, the NSS 905 can continue to monitor the level of attention. For example, upon changing the audio signal, the NSS 905 can detect a decrease in eye movement, which can indicate an increase in a level of attention provided to the audio signal.
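As a compact sketch of the comparison against historical eye movement described above, the following fragment raises the volume when current movement deviates well above its historical baseline. The deviation ratio, the 3 dB step, and the field names are assumptions used only for illustration.

    def check_attention_and_adjust(current_movement, historical_movements,
                                   audio_params, deviation_ratio=1.5,
                                   volume_step_db=3.0):
        # If eye movement rises well above its historical level, nudge the audio
        # signal (here, by an assumed volume step) to recapture the subject's attention.
        baseline = sum(historical_movements) / len(historical_movements)
        if current_movement > deviation_ratio * baseline:
            audio_params["volume_db"] += volume_step_db
            audio_params["attention_low"] = True
        else:
            audio_params["attention_low"] = False
        return audio_params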

[00373] The feedback sensor 1405 can interact with or communicate with the NSS 905. For example, the feedback sensor 1405 can provide detected feedback information or data to the NSS 905 (e.g., feedback monitor 935). The feedback sensor 1405 can provide data to the NSS 905 in real-time, for example as the feedback sensor 1405 detects or senses information. The feedback sensor 1405 can provide the feedback information to the NSS 905 based on a time interval, such as 1 minute, 2 minutes, 5 minutes, 10 minutes, hourly, 2 hours, 4 hours, 12 hours, or 24 hours. The feedback sensor 1405 can provide the feedback information to the NSS 905 responsive to a condition or event, such as a feedback measurement exceeding a threshold or falling below a threshold. The feedback sensor 1405 can provide feedback information responsive to a change in a feedback parameter. In some embodiments, the NSS 905 can ping, query, or send a request to the feedback sensor 1405 for information, and the feedback sensor 1405 can provide the feedback information in response to the ping, request, or query.

Method for Neural Stimulation via Auditory Stimulation

[00374] FIG. 15 is a flow diagram of a method of performing auditory brain entrainment in accordance with an embodiment. The method 1500 can be performed by one or more system, component, module or element depicted in FIGS. 7A, 7B, and 9-14, including, for example, a neural stimulation system (NSS). In brief overview, the NSS can identify an audio signal to provide at block 1505. At block 1510, the NSS can generate and transmit the identified audio signal. At block 1515, the NSS can receive or determine feedback associated with neural activity, physiological activity, environmental parameters, or device parameters. At block 1520, the NSS can manage, control, or adjust the audio signal based on the feedback.

NSS Operating With Headphones

[00375] The NSS 905 can operate in conjunction with the speakers 1205 as depicted in FIG. 12A. The NSS 905 can operate in conjunction with earphones or in-ear phones including the speaker 1205 and a feedback sensor 1405.

[00376] In operation, a subject using the headphones can wear the headphones on their head such that the speakers are placed at or in the ear canals. In some cases, the subject can provide an indication to the NSS 905 that the headphones have been worn and that the subject is ready to undergo brainwave entrainment. The indication can include an instruction, command, selection, input, or other indication via an input/output interface, such as a keyboard 726, pointing device 727, or other I/O devices 730a-n. The indication can be a motion-based indication, visual indication, or voice-based indication. For example, the subject can provide a voice command that indicates that the subject is ready to undergo brainwave entrainment.

[00377] In some cases, the feedback sensor 1405 can determine that the subject is ready to undergo brainwave entrainment. The feedback sensor 1405 can detect that the headphones have been placed on a subject’s head. The NSS 905 can receive motion data, acceleration data, gyroscope data, temperature data, or capacitive touch data to determine that the headphones have been placed on the subject’s head. The received data, such as motion data, can indicate that the headphones were picked up and placed on the subject’s head. The temperature data can measure the temperature of or proximate to the headphones, which can indicate that the headphones are on the subject’s head. The NSS 905 can detect that the subject is ready responsive to determining that the subject is paying a high level of attention to the headphones or feedback sensor 1405.

[00378] Thus, the NSS 905 can detect or determine that the headphones have been worn and that the subject is in a ready state, or the NSS 905 can receive an indication or confirmation from the subject that the subject has worn the headphones and the subject is ready to undergo brainwave entrainment. Upon determining that the subject is ready, the NSS 905 can initialize the brainwave entrainment process. In some embodiments, the NSS 905 can access a profile data structure 945. For example, a profile manager 925 can query the profile data structure 945 to determine one or more parameter for the external auditory stimulation used for the brain entrainment process. Parameters can include, for example, a type of audio stimulation technique, an intensity or volume of the audio stimulation, frequency of the audio stimulation, duration of the audio stimulation, or wavelength of the audio stimulation. The profile manager 925 can query the profile data structure 945 to obtain historical brain entrainment information, such as prior auditory stimulation sessions. The profile manager 925 can perform a lookup in the profile data structure 945. The profile manager 925 can perform a look-up with a username, user identifier, location information, fingerprint, biometric identifier, retina scan, voice recognition and authentication, or other identifying technique.

[00379] The NSS 905 can determine a type of external auditory stimulation based on the components connected to the headphones. The NSS 905 can determine the type of external auditory stimulation based on the type of speakers 1205 available. For example, if the headphones are connected to an audio player, the NSS 905 can determine to embed acoustic pulses. If the headphones are not connected to an audio player, but only to the microphone, the NSS 905 can determine to inject a pure tone or modify ambient noise.

[00380] In some embodiments, the NSS 905 can determine the type of external auditory stimulation based on historical brainwave entrainment sessions. For example, the profile data structure 945 can be pre-configured with information about the type of audio signaling component 950.

[00381] The NSS 905 can determine, via the profile manager 925, a modulation frequency for the pulse train or the audio signal. For example, NSS 905 can determine, from the profile data structure 945, that the modulation frequency for the external auditory stimulation should be set to 40 Hz. Depending on the type of auditory stimulation, the profile data structure 945 can further indicate a pulse length, intensity, wavelength of the acoustic wave forming the audio signal, or duration of the pulse train.

[00382] In some cases, the NSS 905 can determine or adjust one or more parameter of the external auditory stimulation. For example, the NSS 905 (e.g., via feedback component 960 or feedback sensor 1405) can determine an amplitude of the acoustic wave or volume level for the sound. The NSS 905 (e.g., via audio adjustment module 915 or side effects management module 930) can establish, initialize, set, or adjust the amplitude or wavelength of the acoustic waves or acoustic pulses. For example, the NSS 905 can determine that there is a low level of ambient noise. Due to the low level of ambient noise, the subject’s hearing may not be impaired or distracted. The NSS 905 can determine, based on detecting a low level of ambient noise, that it may not be necessary to increase the volume, or that it may be possible to reduce the volume while maintaining the efficacy of brainwave entrainment.

[00383] In some embodiments, the NSS 905 can monitor (e.g., via feedback monitor 935 and feedback component 960) the level of ambient noise throughout the brainwave entrainment process to automatically and periodically adjust the amplitude of the acoustic pulses. For example, if the subject began the brainwave entrainment process when there was a high level of ambient noise, the NSS 905 can initially set a higher amplitude for the acoustic pulses and use a tone that includes frequencies that are easier to perceive, such as 10 kHz. However, in some embodiments in which the ambient noise level decreases throughout the brainwave entrainment process, the NSS 905 can automatically detect the decrease in ambient noise and, in response to the detection, adjust or lower the volume while decreasing the frequency of the acoustic wave. The NSS 905 can adjust the acoustic pulses to provide a high contrast ratio with respect to ambient noise to facilitate brainwave entrainment.

[00384] In some embodiments, the NSS 905 (e.g., via feedback monitor 935 and feedback component 960) can monitor or measure physiological conditions to set or adjust a parameter of the acoustic wave. In some embodiments, the NSS 905 can monitor or measure heart rate, pulse rate, blood pressure, body temperature, perspiration, or brain activity to set or adjust a parameter of the acoustic wave.
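One way to keep a high contrast ratio between the acoustic pulses and the background, as described in paragraph [00383], is to tie the pulse level to the measured ambient level. The contrast margin and the comfort limits below are assumed example values.

    def pulse_level_from_ambient(ambient_db, contrast_db=15.0,
                                 min_db=40.0, max_db=80.0):
        # Set the pulse volume a fixed margin above ambient noise,
        # clamped to assumed comfort limits.
        return min(max(ambient_db + contrast_db, min_db), max_db)

    print(pulse_level_from_ambient(30.0))   # 45.0 dB in quiet surroundings
    print(pulse_level_from_ambient(65.0))   # 80.0 dB, clamped at the assumed ceiling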

[00385] In some embodiments, the NSS 905 can be preconfigured to initially transmit acoustic pulses having a lowest setting for the acoustic wave intensity (e.g., low amplitude or high wavelength) and gradually increase the intensity (e.g., increase the amplitude or decrease the wavelength) while monitoring feedback until an optimal audio intensity is reached. An optimal audio intensity can refer to a highest intensity without adverse physiological side effects, such as deafness, seizures, heart attack, migraines, or other discomfort. The NSS 905 (e.g., via side effects management module 930) can monitor the physiological symptoms to identify the adverse side effects of the external auditory stimulation, and adjust (e.g., via audio adjustment module 915) the external auditory stimulation accordingly to reduce or eliminate the adverse side effects.
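The ramp-up procedure described above can be sketched as a loop that raises intensity in small steps until feedback reports discomfort or a ceiling is reached. The step size, ceiling, and the callback interface are assumptions, not the disclosed mechanism.

    def ramp_to_optimal_intensity(read_feedback, set_intensity,
                                  start=0.1, step=0.05, ceiling=1.0):
        # Increase intensity gradually; back off one step and stop at the first
        # sign of an adverse side effect reported by the feedback callback.
        intensity = start
        while True:
            set_intensity(intensity)
            if read_feedback()["adverse_effect"]:
                intensity = max(start, intensity - step)
                set_intensity(intensity)
                break
            if intensity >= ceiling:
                break
            intensity = min(intensity + step, ceiling)
        return intensity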

[00386] In some embodiments, the NSS 905 (e.g., via audio adjustment module 915) can adjust a parameter of the audio wave or acoustic pulse based on a level of attention. For example, during the brainwave entrainment process, the subject may get bored, lose focus, fall asleep, or otherwise not pay attention to the acoustic pulses. Not paying attention to the acoustic pulses may reduce the efficacy of the brainwave entrainment process, resulting in neurons oscillating at a frequency different from the desired modulation frequency of the acoustic pulses.

[00387] NSS 905 can detect the level of attention the subject is paying to the acoustic pulses using the feedback monitor 935 and one or more feedback component 960. Responsive to determining that the subject is not paying a satisfactory amount of attention to the acoustic pulses, the audio adjustment module 915 can change a parameter of the audio signal to gain the subject’s attention. For example, the audio adjustment module 915 can increase the amplitude of the acoustic pulse, adjust the tone of the acoustic pulse, or change the duration of the acoustic pulse. The audio adjustment module 915 can randomly vary one or more parameters of the acoustic pulse. The audio adjustment module 915 can initiate an attention seeking acoustic sequence configured to regain the subject’s attention. For example, the audio sequence can include a change in frequency, tone, amplitude, or insert words or music in a predetermined, random, or pseudo-random pattern. The attention seeking audio sequence can enable or disable different acoustic sources if the audio signaling component 950 includes multiple audio sources or speakers. Thus, the audio adjustment module 915 can interact with the feedback monitor 935 to determine a level of attention the subject is providing to the acoustic pulses, and adjust the acoustic pulses to regain the subject’s attention if the level of attention falls below a threshold.

[00388] In some embodiments, the audio adjustment module 915 can change or adjust one or more parameter of the acoustic pulse or acoustic wave at predetermined time intervals (e.g., every 5 minutes, 10 minutes, 15 minutes, or 20 minutes) to regain or maintain the subject’s attention level.

[00389] In some embodiments, the NSS 905 (e.g., via unwanted frequency filtering module 920) can filter, block, attenuate, or remove unwanted external auditory stimulation. Unwanted external auditory stimulation can include, for example, unwanted modulation frequencies, unwanted intensities, or unwanted wavelengths of sound waves. The NSS 905 can deem a modulation frequency to be unwanted if the modulation frequency of a pulse train is different or substantially different (e.g., 1%, 2%, 5%, 10%, 15%, 20%, 25%, or more than 25%) from a desired frequency.

[00390] For example, the desired modulation frequency for brainwave entrainment can be 40 Hz. However, a modulation frequency of 20 Hz or 80 Hz can reduce the beneficial effects to cognitive functioning of the brain, a cognitive state of the brain, the immune system, or inflammation that can result from brainwave entrainment at other frequencies, such as 40 Hz. Thus, the NSS 905 can filter out the acoustic pulses corresponding to the 20 Hz or 80 Hz modulation frequency.

[00391] In some embodiments, the NSS 905 can detect, via feedback component 960, that there are acoustic pulses from an ambient noise source that corresponds to an unwanted modulation frequency of 20 Hz. The NSS 905 can further determine the wavelength of the acoustic waves of the acoustic pulses corresponding to the unwanted modulation frequency. The NSS 905 can instruct the filtering component 955 to filter out the wavelength corresponding to the unwanted modulation frequency.
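The filtering of unwanted modulation frequencies described in paragraphs [00390]-[00391] can be illustrated by estimating the dominant modulation rate of a segment from its amplitude envelope and attenuating segments that deviate from the desired 40 Hz. This envelope-domain sketch is only one assumed approach; the tolerance and the full attenuation are illustrative choices rather than the disclosed filtering component.

    import numpy as np

    def dominant_modulation_hz(segment, fs):
        # Estimate the strongest modulation frequency of a segment from the
        # spectrum of its rectified amplitude envelope.
        envelope = np.abs(segment)
        envelope = envelope - envelope.mean()
        spectrum = np.abs(np.fft.rfft(envelope))
        freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]

    def gate_unwanted(segment, fs, desired_hz=40.0, tolerance=0.10):
        # Attenuate a segment whose modulation rate deviates from the desired
        # rate by more than the tolerance (e.g., 20 Hz or 80 Hz bursts).
        mod = dominant_modulation_hz(segment, fs)
        if abs(mod - desired_hz) > tolerance * desired_hz:
            return segment * 0.0   # assumed full attenuation for illustration
        return segment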

Neural Stimulation Via Peripheral Nerve Stimulation

[00392] In some embodiments, systems and methods of the present disclosure can provide peripheral nerve stimulation to cause or induce neural oscillations. For example, haptic stimulation on the skin around sensory nerves forming part of or connected to the peripheral nervous system can cause or induce electrical activity in the sensory nerves, causing a transmission to the brain via the central nervous system, which can be perceived by the brain or can cause or induce electrical and neural activity in the brain, including activity resulting in neural oscillations. Similarly, electric currents on or through the skin around sensory nerves forming part of or connected to the peripheral nervous system can cause or induce electrical activity in the sensory nerves, causing a transmission to the brain via the central nervous system, which can be perceived by the brain or can cause or induce electrical and neural activity in the brain, including activity resulting in neural oscillations. The brain, responsive to receiving the peripheral nerve stimulations, can adjust, manage, or control the frequency of neural oscillations. The electric currents can result in depolarization of neural cells, such as due to electric current stimuli such as time-varying pulses. The electric current pulse may directly cause depolarization. Secondary effects in other regions of the brain may be gated or controlled by the brain in response to the depolarization. The peripheral nerve stimulations generated at a predetermined frequency can trigger neural activity in the brain to cause or induce neural oscillations. The frequency of neural oscillations can be based on or correspond to the frequency of the peripheral nerve stimulations, or a modulation frequency associated with the peripheral nerve stimulations. Thus, systems and methods of the present disclosure can cause or induce neural oscillations using peripheral nerve stimulations such as electric current pulses modulated at a predetermined frequency to synchronize electrical activity among groups of neurons based on the frequency of the peripheral nerve stimulations. Brain entrainment associated with neural oscillations can be observed based on the aggregate frequency of oscillations produced by the synchronous electrical activity in ensembles of cortical neurons. The frequency of the modulation of the electric currents, or pulses thereof, can cause or adjust this synchronous electrical activity in the ensembles of cortical neurons to oscillate at a frequency corresponding to the frequency of the peripheral nerve stimulation pulses.
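The time-varying electric current pulses described above can be represented, purely for illustration, as a pulse train whose repetition rate equals the intended stimulation frequency. The pulse width, amplitude, and sampling rate below are assumed example values, and actual current delivery would require appropriate stimulation hardware and safety controls.

    import numpy as np

    def current_pulse_train(duration_s, stim_hz=40.0, pulse_width_s=0.001,
                            amplitude_ma=1.0, fs=10000):
        # Return a sampled waveform of current pulses repeating at stim_hz.
        n = np.arange(int(duration_s * fs))
        phase = (n * stim_hz / fs) % 1.0
        return np.where(phase < pulse_width_s * stim_hz, amplitude_ma, 0.0)

    train = current_pulse_train(1.0)   # one second of assumed 40 Hz, 1 ms, 1 mA pulses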

[00393] FIG. 16A is a block diagram depicting a system to perform peripheral nerve stimulation to cause or induce neural oscillations, such as to cause brain entrainment, in accordance with an embodiment. The system 1600 can include a peripheral nerve stimulation system 1605. In brief overview, the peripheral nerve stimulation system (or peripheral nerve stimulation neural stimulation system) (“NSS”) 1605 can include, access, interface with, or otherwise communicate with one or more of a nerve stimulus generation module 1610, nerve stimulus adjustment module 1615, profile manager 1625, side effects management module 1630, feedback monitor 1635, data repository 1640, nerve stimulus generator component 1650, shielding component 1655, feedback component 1660, or nerve stimulus amplification component 1665. The nerve stimulus generation module 1610, nerve stimulus adjustment module 1615, profile manager 1625, side effects management module 1630, feedback monitor 1635, nerve stimulus generator component 1650, shielding component 1655, feedback component 1660, or nerve stimulus amplification component 1665 can each include at least one processing unit or other logic device, such as a programmable logic array engine, or module configured to communicate with the data repository 1640. The nerve stimulus generation module 1610, nerve stimulus adjustment module 1615, profile manager 1625, side effects management module 1630, feedback monitor 1635, nerve stimulus generator component 1650, shielding component 1655, feedback component 1660, or nerve stimulus amplification component 1665 can be separate components, a single component, or part of the NSS 1605. The system 1600 and its components, such as the NSS 1605, may include hardware elements, such as one or more processors, logic devices, or circuits. The system 1600 and its components, such as the NSS 1605, can include one or more hardware or interface component depicted in system 700 in FIGS. 7A and 7B. For example, a component of system 1600 can include or execute on one or more processors 721, access storage 728 or memory 722, and communicate via network interface 718.

Neural Stimulation Via Multiple Modes of Stimulation

[00394] FIG. 16B is a block diagram depicting a system for neural stimulation via multiple modes of stimulation in accordance with an embodiment. The system 1600 can include a neural stimulation orchestration system (“NSOS”) 1605. The NSOS 1605 can provide multiple modes of stimulation. For example, the NSOS 1605 can provide a first mode of stimulation that includes visual stimulation, and a second mode of stimulation that includes auditory stimulation. For each mode of stimulation, the NSOS 1605 can provide a type of signal. For example, for the visual mode of stimulation, the NSOS 1605 can provide the following types of signals: light pulses, image patterns, flicker of ambient light, or augmented reality. NSOS 1605 can orchestrate, manage, control, or otherwise facilitate providing multiple modes of stimulation and types of stimulation.

[00395] In brief overview, the NSOS 1605 can include, access, interface with, or otherwise communicate with one or more of a stimuli orchestration component 1610, a subject assessment module 1650, a data repository 1615, one or more signaling components 1630a-n, one or more filtering components 1635a-n, one or more feedback components 1640a-n, and one or more neural stimulation systems (“NSS”) 1645a-n. The data repository 1615 can include or store a profile data structure 1620 and a policy data structure 1625. The stimuli orchestration component 1610 and subject assessment module 1650 can include at least one processing unit or other logic device, such as a programmable logic array engine, or module configured to communicate with the data repository 1615. The stimuli orchestration component 1610 and subject assessment module 1650 can be a single component, include separate components, or be part of the NSOS 1605. The system 1600 and its components, such as the NSOS 1605, may include hardware elements, such as one or more processors, logic devices, or circuits. The system 1600 and its components, such as the NSOS 1605, can include one or more hardware or interface component depicted in system 700 in FIGS. 7A and 7B. For example, a component of system 1600 can include or execute on one or more processors 721, access storage 728 or memory 722, and communicate via network interface 718. The system 1600 can include one or more component or functionality depicted in FIGS. 1-15, including, for example, system 100, system 900, visual NSS 105, or auditory NSS 905. For example, at least one of the signaling components 1630a-n can include one or more component or functionality of visual signaling component 150 or audio signaling component 950. At least one of the filtering components 1635a-n can include one or more component or functionality of filtering component 155 or filtering component 955. At least one of the feedback components 1640a-n can include one or more component or functionality of feedback component 160 or feedback component 960. At least one of the NSS 1645a-n can include one or more component or functionality of visual NSS 105 or auditory NSS 905.

[00396] Still referring to FIG. 16B, and in further detail, the NSOS 1605 can include at least one stimuli orchestration component 1610. The stimuli orchestration component 1610 can be designed and constructed to perform neural stimulation using multiple modalities of stimulation. The stimuli orchestration component 1610 or NSOS 1605 can interface with at least one of the signaling components 1630a-n, at least one of the filtering components 1635a-n, or at least one of the feedback components 1640a-n. One or more of the signaling components 1630a-n can be a same type of signaling component or a different type of signaling component. The type of signaling component can correspond to a mode of stimulation. For example, multiple types of signaling components 1630a-n can correspond to visual signaling components or auditory signaling components. In some cases, at least one of the signaling components 1630a-n includes a visual signaling component 150 such as a light source, LED, laser, tablet computing device, or virtual reality headset. At least one of the signaling components includes an audio signaling component 950, such as headphones, speakers, cochlear implants, or air jets.

[00397] One or more of the filtering components 1635a-n can be a same type of filtering component or a different type of filtering component. One or more of the feedback components 1640a-n can be a same type of feedback component or a different type of feedback component. For example, the feedback components 1640a-n can include an electrode, dry electrode, gel electrode, saline soaked electrode, adhesive-based electrodes, a temperature sensor, heart or pulse rate monitor, physiological sensor, ambient light sensor, ambient temperature sensor, sleep status via actigraphy, blood pressure monitor, respiratory rate monitor, brain wave sensor, EEG probe, EOG probes configured to measure the corneo-retinal standing potential that exists between the front and the back of the human eye, accelerometer, gyroscope, motion detector, proximity sensor, camera, microphone, or photo detector.

[00398] The stimuli orchestration component 1610 can include or be configured with an interface to communicate with different types of signaling components 1630a-n, filtering components 1635a-n or feedback components 1640a-n. The NSOS 1605 or stimuli orchestration component 1610 can interface with system intermediary to one of the signaling components 1630a-n, filtering components 1635a-n, or feedback components 1640a-n. For example, the stimuli orchestration component 1610 can interface with the visual NSS 105 depicted in FIG. 1 or auditory NSS 905 depicted in FIG. 9. Thus, in some embodiments, the stimuli orchestration component 1610 or NSOS 1605 can indirectly interface with at least one of the signaling components 1630a-n, filtering components 1635a-n, or feedback components 1640a-n.

[00399] The stimuli orchestration component 1610 (e.g., via the interface) can ping each of the signaling components 1630a-n, filtering components 1635a-n, and feedback components 1640a-n to determine information about the components. The information can include a type of the component (e.g., visual, auditory, attenuator, optical filter, temperature sensor, or light sensor), configuration of the component (e.g., frequency range, amplitude range), or status information (e.g., standby, ready, online, enabled, error, fault, offline, disabled, warning, service needed, availability, or battery level).

[00400] The stimuli orchestration component 1610 can instruct or cause at least one of the signaling components 1630a-n to generate, transmit or otherwise provide a signal that can be perceived, received or observed by the brain and affect a frequency of neural oscillations in at least one region or portion of a subject’s brain. The signal can be perceived via various means, including, for example, optical nerves or cochlear cells.

[00401] The stimuli orchestration component 1610 can access the data repository 1615 to retrieve profile information 1620 and a policy 1625. The profile information 1620 can include profile information 145 or profile information 945. The policy 1625 can include a multi-modal stimulation policy. The policy 1625 can indicate a multi-modal stimulation program. The stimuli orchestration component 1610 can apply the policy 1625 to profile information to determine a type of stimulation (e.g., visual or auditory) and determine a value for a parameter for each type of stimulation (e.g., amplitude, frequency, wavelength, color, etc.). The stimuli orchestration component 1610 can apply the policy 1625 to the profile information 1620 and feedback information received from one or more feedback components 1640a-n to determine or adjust the type of stimulation (e.g., visual or auditory) and determine or adjust the value of the parameter for each type of stimulation (e.g., amplitude, frequency, wavelength, color, etc.). The stimuli orchestration component 1610 can apply the policy 1625 to profile information to determine a type of filter to be applied by at least one of the filtering components 1635a-n (e.g., audio filter or visual filter) and determine a value for a parameter for the type of filter (e.g., frequency, wavelength, color, sound attenuation, etc.). The stimuli orchestration component 1610 can apply the policy 1625 to profile information and feedback information received from one or more feedback components 1640a-n to determine or adjust the type of filter to be applied by at least one of the filtering components 1635a-n (e.g., audio filter or visual filter) and determine or adjust the value for the parameter for the filter (e.g., frequency, wavelength, color, sound attenuation, etc.).
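
As a rough illustration of paragraph [00401], the sketch below shows one way a stimulation policy could be applied to profile and feedback information to select a stimulation type and parameter values. The dictionary layout, key names, and defaults (including the 40 Hz value) are illustrative assumptions consistent with the surrounding description, not the actual policy format.

```python
def apply_policy(policy, profile, feedback=None):
    """Resolve stimulation modes and parameters from a policy, a subject profile,
    and optional feedback information. Purely illustrative."""
    # Start from the policy's default program, e.g., combined audio + visual stimulation.
    plan = {
        "modes": policy.get("modes", ["visual", "auditory"]),
        "frequency_hz": policy.get("frequency_hz", 40.0),
        "amplitude": policy.get("amplitude", "low"),
        "filters": policy.get("filters", []),
    }
    # Profile information can override defaults (e.g., hearing or vision limitations).
    if profile.get("hearing_impaired"):
        plan["modes"] = [m for m in plan["modes"] if m != "auditory"]
    # Feedback can adjust parameters, e.g., raise amplitude if entrainment appears weak.
    if feedback and feedback.get("entrainment") == "weak":
        plan["amplitude"] = "high"
    return plan

# Example use with hypothetical inputs:
example_plan = apply_policy({"frequency_hz": 40.0},
                            {"hearing_impaired": False},
                            {"entrainment": "weak"})
```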

[00402] The NSOS 1605 can obtain the profile information 1620 via a subject assessment module 1650. The subject assessment module 1650 can be designed and constructed to determine, for one or more subjects, information that can facilitate neural stimulation via one or more modes of stimulation. The subject assessment module 1650 can receive, obtain, detect, determine or otherwise identify the information via feedback components 1640a-n, surveys, queries, questionnaires, prompts, remote profile information accessible via a network, diagnostic tests, or historical treatments.

[00403] The subject assessment module 1650 can receive the information prior to initiating neural stimulation, during neural stimulation, or after neural stimulation. For example, the subject assessment module 1650 can provide a prompt with a request for information prior to initiating the neural stimulation session. The subject assessment module 1650 can provide a prompt with a request for information during the neural stimulation session. The subject assessment module 1650 can receive feedback from feedback component 1640a-n (e.g., an EEG probe) during the neural stimulation session. The subject assessment module 1650 can provide a prompt with a request for information subsequent to termination of the neural stimulation session. The subject assessment module 1650 can receive feedback from feedback component 1640a-n subsequent to termination of the neural stimulation session.

[00404] The subject assessment module 1650 can use the information to determine an effectiveness of a modality of stimulation (e.g., visual stimulation or auditory stimulation) or a type of signal (e.g., light pulse from a laser or LED source, ambient light flicker, or image pattern displayed by a tablet computing device). For example, the subject assessment module 1650 can determine that the desired neural stimulation resulted from a first mode of stimulation or first type of signal, while the desired neural stimulation did not occur or took longer to occur with the second mode of stimulation or second type of signal. The subject assessment module 1650 can determine that the desired neural stimulation was less pronounced from the second mode of stimulation or second type of signal relative to the first mode of stimulation or first type of signal based on feedback information from a feedback component 1640a-n.

[00405] The subject assessment module 1650 can determine the level of effectiveness of each mode or type of stimulation independently, or based on a combination of modes or types of stimulation. A combination of modes of stimulation can refer to transmitting signals from different modes of stimulation at the same or substantially similar time. A combination of modes of stimulation can refer to transmitting signals from different modes of stimulation in an overlapping manner. A combination of modes of stimulation can refer to transmitting signals from different modes of stimulation in a non-overlapping manner, but within a time interval from one another (e.g., transmit a signal pulse train from a second mode of stimulation within 0.5 seconds, 1 second, 1.5 seconds, 2 seconds, 2.5 seconds, 3 seconds, 5 seconds, 7 seconds, 10 seconds, 12 seconds, 15 seconds, 20 seconds, 30 seconds, 45 seconds, 60 seconds, 1 minute, 2 minutes, 3 minutes, 5 minutes, 10 minutes, or other time interval where the effect on the frequency of neural oscillation by a first mode can overlap with the second mode).

[00406] The subject assessment module 1650 can aggregate or compile the information and update the profile data structure 1620 stored in data repository 1615. In some cases, the subject assessment module 1650 can update or generate a policy 1625 based on the received information. The policy 1625 or profile information 1620 can indicate which modes or types of stimulation are more likely to have a desired effect on neural stimulation, while reducing side effects.
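
Paragraph [00405] distinguishes simultaneous, overlapping, and non-overlapping-but-nearby combinations of stimulation from different modes. A small helper like the hypothetical one below could classify the relationship between two pulse trains given their start and end times; the gap threshold is an assumption drawn from the example intervals listed in that paragraph.

```python
def classify_combination(start_a, end_a, start_b, end_b, max_gap_s=1.0):
    """Classify how two pulse trains combine in time.
    Times are in seconds; max_gap_s is an assumed proximity threshold."""
    if start_a == start_b and end_a == end_b:
        return "simultaneous"
    if start_a < end_b and start_b < end_a:
        return "overlapping"
    gap = max(start_b - end_a, start_a - end_b)
    if gap <= max_gap_s:
        return "non-overlapping within interval"
    return "separate"

# e.g., a second train starting 0.5 s after the first one ends:
print(classify_combination(0.0, 10.0, 10.5, 20.0))  # -> "non-overlapping within interval"
```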

[00407] The stimuli orchestration component 1610 can instruct or cause multiple signaling components 1630a-n to generate, transmit or otherwise provide different types of stimulation or signals pursuant to the policy 1625, profile information 1620, or feedback information detected by feedback components 1640a-n. The stimuli orchestration component 1610 can cause multiple signaling components 1630a-n to generate, transmit or otherwise provide different types of stimulation or signals simultaneously or at substantially the same time. For example, a first signaling component 1630a can transmit a first type of stimulation at the same time as a second signaling component 1630b transmits a second type of stimulation. The first signaling component 1630a can transmit or provide a first set of signals, pulses or stimulation at the same time the second signaling component 1630b transmits or provides a second set of signals, pulses or stimulation. For example, a first pulse from a first signaling component 1630a can begin at the same time or substantially the same time (e.g., 1%, 2%, 3%, 4%, 5%, 6%, 7%, 10%, 15%, 20%) as a second pulse from a second signaling component 1630b. First and second pulses can end at the same time or substantially the same time. In another example, a first pulse train can be transmitted by the first signaling component 1630a at the same or substantially similar time as a second pulse train transmitted by the second signaling component 1630b.

[00408] The stimuli orchestration component 1610 can cause multiple signaling components 1630a-n to generate, transmit or otherwise provide different types of stimulation or signals in an overlapping manner. The different pulses or pulse trains may overlap one another, but may not necessarily begin or end at the same time. For example, at least one pulse in the first set of pulses from the first signaling component 1630a can at least partially overlap, in time, with at least one pulse from the second set of pulses from the second signaling component 1630b. For example, the pulses can straddle one another. In some cases, a first pulse train transmitted or provided by the first signaling component 1630a can at least partially overlap with a second pulse train transmitted or provided by the second signaling component 1630b. The first pulse train can straddle the second pulse train.

[00409] The stimuli orchestration component 1610 can cause multiple signaling components 1630a-n to generate, transmit or otherwise provide different types of stimulation or signals such that they are received, perceived or otherwise observed by one or more regions or portions of the brain at the same time, simultaneously or at substantially the same time. The brain can receive different modes of stimulation or types of signals at different times. The duration of time between transmission of the signal by a signaling component 1630a-n and reception or perception of the signal by the brain can vary based on the type of signal (e.g., visual, auditory), parameter of the signal (e.g., velocity or speed of the wave, amplitude, frequency, wavelength), or distance between the signaling component 1630a-n and the nerves or cells of the subject configured to receive the signal (e.g., eyes or ears). The stimuli orchestration component 1610 can offset or delay the transmission of signals such that the brain perceives the different signals at the desired time. The stimuli orchestration component 1610 can offset or delay the transmission of a first signal transmitted by a first signaling component 1630a relative to transmission of a second signal transmitted by a second signaling component 1630b. The stimuli orchestration component 1610 can determine an amount of an offset for each type of signal or each signaling component 1630a-n relative to a reference clock or reference signal. The stimuli orchestration component 1610 can be preconfigured or calibrated with an offset for each signaling component 1630a-n.
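
Paragraph [00409] notes that transmission can be offset so that signals from different modalities are perceived at the desired time. The sketch below is one hedged way to compute such offsets from an assumed per-modality latency (for example, acoustic travel time from a speaker plus a calibrated device delay). The latency values are placeholders, not measured figures from the disclosure.

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate speed of sound at room temperature

def transmission_offsets(latencies_s):
    """Given estimated signal latencies per modality (in seconds), return the delay
    to add to each transmission so all signals are perceived at the same time."""
    slowest = max(latencies_s.values())
    return {mode: slowest - latency for mode, latency in latencies_s.items()}

# Hypothetical example: speaker 1.7 m away, plus assumed device latencies.
latencies = {
    "auditory": 1.7 / SPEED_OF_SOUND_M_PER_S + 0.010,  # acoustic travel + device delay
    "visual": 0.002,                                    # assumed display latency
}
offsets = transmission_offsets(latencies)  # visual transmission delayed to match audio
```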

[00410] The stimuli orchestration component 1610 can determine to enable or disable the offset based on the policy 1625. For example, the policy 1625 may indicate to transmit multiple signals at the same time, in which case the stimuli orchestration component 1610 may disable or not use an offset. In another example, the policy 1625 may indicate to transmit multiple signals such that they are perceived by the brain at the same time, in which case the stimuli orchestration component 1610 may enable or use the offset.

[00411] In some embodiments, the stimuli orchestration component 1610 can stagger signals transmitted by different signaling components 1630a-n. For example, the stimuli orchestration component 1610 can stagger the signals such that the pulses from different signaling components 1630a-n are non-overlapping. The stimuli orchestration component 1610 can stagger pulse trains from different signaling components 1630a-n such that they are non-overlapping. The stimuli orchestration component 1610 can set parameters for each mode of stimulation or signaling component 1630a-n such that the signals are non-overlapping.

[00412] Thus, the stimuli orchestration component 1610 can set parameters for signals transmitted by one or more signaling components 1630a-n such that the signals are transmitted synchronously or asynchronously, or perceived by the brain synchronously or asynchronously. The stimuli orchestration component 1610 can apply the policy 1625 to available signaling components 1630a-n to determine the parameters to set for each signaling component 1630a-n for the synchronous or asynchronous transmission. The stimuli orchestration component 1610 can adjust parameters such as a time delay, phase offset, frequency, pulse rate interval, or amplitude to synchronize the signals.
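
For the staggering described in paragraphs [00411] and [00412], the following sketch shows one way to phase-offset several 40 Hz pulse trains so that their pulses do not overlap within a period. The pulse width and number of trains are illustrative assumptions; the only constraint checked is that the combined duty cycle fits within one period.

```python
def stagger_pulse_trains(n_trains, frequency_hz=40.0, pulse_width_s=0.005):
    """Return a start-time offset (seconds) for each pulse train so pulses from
    different trains are non-overlapping within every period."""
    period = 1.0 / frequency_hz                 # e.g., 0.025 s at 40 Hz
    if n_trains * pulse_width_s > period:
        raise ValueError("pulses cannot all fit without overlap at this frequency")
    slot = period / n_trains
    return [i * slot for i in range(n_trains)]

# Three trains at 40 Hz with 5 ms pulses -> offsets of 0.0, ~0.0083, and ~0.0167 s
print(stagger_pulse_trains(3))
```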

[00413] In some embodiments, the NSOS 1605 can adjust or change the mode of stimulation or a type of signal based on feedback received from a feedback component 1640a-n. The stimuli orchestration component 1610 can adjust the mode of stimulation or type of signal based on feedback on the subject, feedback on the environment, or a combination of feedback on the subject and the environment. Feedback on the subject can include, for example, physiological information, temperature, attention level, level of fatigue, activity (e.g., sitting, lying down, walking, biking, or driving), vision ability, hearing ability, side effects (e.g., pain, migraine, ringing in ear, or blindness), or frequency of neural oscillation at a region or portion of the brain (e.g., via EEG probes). Feedback information on the environment can include, for example, ambient temperature, ambient light, ambient sound, battery information, or power source.

[00414] The stimuli orchestration component 1610 can determine to maintain or change an aspect of the stimulation treatment based on the feedback. For example, the stimuli orchestration component 1610 can determine that the neurons are not oscillating at the desired frequency in response to the first mode of stimulation. Responsive to determining that the neurons are not oscillating at the desired frequency, the stimuli orchestration component 1610 can disable the first mode of stimulation and enable a second mode of stimulation. The stimuli orchestration component 1610 can again determine (e.g., via feedback component 1640a) that the neurons are not oscillating at the desired frequency in response to the second mode of stimulation. Responsive to determining that the neurons are still not oscillating at the desired frequency, the stimuli orchestration component 1610 can increase an amplitude of the signal corresponding to the second mode of stimulation. The stimuli orchestration component 1610 can determine that the neurons are oscillating at the desired frequency in response to increasing the amplitude of a signal corresponding to the second mode of stimulation.
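
Paragraph [00414] walks through a feedback-driven escalation: switch modes when entrainment is not observed, then raise amplitude. The loop below is a simplified, hypothetical rendering of that logic; the measurement callable, tolerance, and amplitude steps are assumptions rather than parameters from the disclosure.

```python
def adjust_stimulation(measure_oscillation_hz, modes, target_hz=40.0,
                       tolerance_hz=1.0, amplitudes=(0.2, 0.5, 1.0)):
    """Cycle through stimulation modes at a given amplitude, escalating amplitude
    after the available modes have been tried. Illustrative only."""
    for amplitude in amplitudes:
        for mode in modes:
            measured = measure_oscillation_hz(mode, amplitude)  # e.g., via an EEG probe
            if abs(measured - target_hz) <= tolerance_hz:
                return {"mode": mode, "amplitude": amplitude, "measured_hz": measured}
    return None  # no mode/amplitude combination produced the desired oscillation
```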

[00415] The stimuli orchestration component 1610 can monitor the frequency of neural oscillations at a region or portion of the brain. The stimuli orchestration component 1610 can determine that neurons in a first region of the brain are oscillating at the desired frequency, whereas neurons in a second region of the brain are not oscillating at the desired frequency. The stimuli orchestration component 1610 can perform a lookup in the profile data structure 1620 to determine a mode of stimulation or type of signal that maps to the second region of the brain. The stimuli orchestration component 1610 can compare the results of the lookup with the currently enabled mode of stimulation to determine that a third mode of stimulation is more likely to cause the neurons in the second region of the brain to oscillate at the desired frequency. Responsive to the determination, the stimuli orchestration component 1610 can identify a signaling component 1630a-n configured to generate and transmit signals corresponding to the selected third mode of stimulation, and instruct or cause the identified signaling component 1630a-n to transmit the signals.

[00416] In some embodiments, the stimuli orchestration component 1610 can determine, based on feedback information, that a mode of stimulation is likely to affect the frequency of neural oscillation, or unlikely to affect the frequency of neural oscillation. The stimuli orchestration component 1610 can select a mode of stimulation from a plurality of modes of stimulation that is most likely to affect the frequency of neural oscillation or result in a desired frequency of neural oscillation. If the stimuli orchestration component 1610 determines, based on the feedback information, that a mode of stimulation is unlikely to affect the frequency of neural oscillation, the stimuli orchestration component 1610 can disable the mode of stimulation for a predetermined duration or until the feedback information indicates that the mode of stimulation would be effective.

[00417] The stimuli orchestration component 1610 can select one or more modes of stimulation to conserve resources or minimize resource utilization. For example, the stimuli orchestration component 1610 can select one or more modes of stimulation to reduce or minimize power consumption if the power source is a battery or if the battery level is low. In another example, the stimuli orchestration component 1610 can select one or more modes of stimulation to reduce heat generation if the ambient temperature is above a threshold or the temperature of the subject is above a threshold. In another example, the stimuli orchestration component 1610 can select one or more modes of stimulation to increase the level of attention if the stimuli orchestration component 1610 determines that the subject is not focusing on the stimulation (e.g., based on eye tracking or an undesired frequency of neural oscillations).

Neural Stimulation Via Visual Stimulation and Auditory Stimulation

[00418] FIG. 17A is a block diagram depicting an embodiment of a system for neural stimulation via visual stimulation and auditory stimulation. The system 1700 can include the NSOS 1605. The NSOS 1605 can interface with the visual NSS 105 and the auditory NSS 905. The visual NSS 105 can interface or communicate with the visual signaling component 150, filtering component 155, and feedback component 160. The auditory NSS 905 can interface or communicate with the audio signaling component 950, filtering component 955, and feedback component 960.

[00419] To provide neural stimulation via visual stimulation and auditory stimulation, the NSOS 1605 can identify the types of available components for the neural stimulation session. The NSOS 1605 can identify the types of visual signals the visual signaling component 150 is configured to generate. The NSOS 1605 can also identify the types of audio signals the audio signaling component 950 is configured to generate. The NSOS 1605 can be preconfigured with the types of visual signals and audio signals the components 150 and 950 are configured to generate. The NSOS 1605 can ping the components 150 and 950 for information about the components 150 and 950. The NSOS 1605 can query the components, send an SNMP request, broadcast a query, or otherwise determine information about the available visual signaling component 150 and audio signaling component 950.

[00420] For example, the NSOS 1605 can determine that the following components are available for neural stimulation: the visual signaling component 150 includes the virtual reality headset 401 depicted in FIG. 4C; the audio signaling component 950 includes the speaker 1205 depicted in FIG. 12B; the feedback component 160 includes an ambient light sensor 605, an eye tracker 605 and an EEG probe depicted in FIG. 4C; the feedback component 960 includes a microphone 1210 and feedback sensor 1225 depicted in FIG. 12B; and the filtering component 955 includes a noise cancellation component 1215. The NSOS 1605 can further determine an absence of filtering component 155 communicatively coupled to the visual NSS 105. The NSOS 1605 can determine the presence (available or online) or absence (offline) of components via visual NSS 105 or auditory NSS 905. The NSOS 1605 can further obtain identifiers for each of the available or online components.

[00421] The NSOS 1605 can perform a lookup in the profile data structure 1620 using an identifier of the subject to identify one or more types of visual signals and audio signals to provide to the subject. The NSOS 1605 can perform a lookup in the profile data structure 1620 using identifiers for the subject and each of the online components to identify one or more types of visual signals and audio signals to provide to the subject. The NSOS 1605 can perform a lookup in the policy data structure 1625 using an identifier of the subject to obtain a policy for the subject. The NSOS 1605 can perform a lookup in the policy data structure 1625 using identifiers for the subject and each of the online components to identify a policy for the types of visual signals and audio signals to provide to the subject.
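
Paragraphs [00420] and [00421] describe profile and policy lookups keyed by the subject and by the components that are online. A hedged sketch of one such keying scheme follows; the dictionary layout and identifier strings are assumptions, not the actual schema of the profile data structure 1620 or policy data structure 1625.

```python
def lookup_session_plan(profiles, policies, subject_id, online_component_ids):
    """Resolve signal types and a policy for a subject given which components are online.
    Data layout is illustrative only."""
    key = (subject_id, tuple(sorted(online_component_ids)))
    # Prefer an entry specific to this subject + component combination,
    # then fall back to a subject-only entry.
    profile = profiles.get(key) or profiles.get(subject_id, {})
    policy = policies.get(key) or policies.get(subject_id, {})
    return profile, policy

# Hypothetical repository contents:
profiles = {("subj-1", ("audio-950", "vr-401")): {"signal_types": ["laser pulse", "music modulation"]}}
policies = {"subj-1": {"frequency_hz": 40.0, "session_minutes": 60}}
plan = lookup_session_plan(profiles, policies, "subj-1", ["vr-401", "audio-950"])
```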

[00422] FIG. 17B is a diagram depicting waveforms used for neural stimulation via visual stimulation and auditory stimulation in accordance with an embodiment. FIG. 17B illustrates example sequences or a set of sequences 1701 that the stimuli orchestration component 1610 can generate or cause to be generated by one or more visual signaling components 150 or audio signaling components 950. The stimuli orchestration component 1610 can retrieve the sequences from a data structure stored in data repository 1615 of NSOS 1605, or a data repository corresponding to NSS 105 or NSS 905. The sequences can be stored in a table format, such as Table 1 below. In some embodiments, the NSOS 1605 can select predetermined sequences to generate a set of sequences for a treatment session or time period, such as the set of sequences in Table 1. In some embodiments, the NSOS 1605 can obtain a predetermined or preconfigured set of sequences. In some embodiments, the NSOS 1605 can construct or generate the set of sequences or each sequence based on information obtained from the subject assessment module 1650. In some embodiments, the NSOS 1605 can remove or delete sequences from the set of sequences based on feedback, such as adverse side effects. The NSOS 1605, via subject assessment module 1650, can include sequences that are more likely to stimulate neurons in a predetermined region of the brain to oscillate at a desired frequency.

[00423] The NSOS 1605 can determine, based on the profile information, policy, and available components, to use the following sequences illustrated in example TABLE 2 to provide neural stimulation using both visual signals and auditory signals.

TABLE 2. Audio and Video Stimulation Sequences

[00424] As illustrated in TABLE 2, each waveform sequence can include one or more characteristics, such as a sequence identifier, a mode, a signal type, one or more signal parameters, a modulation or stimulation frequency, and a timing schedule. As illustrated in FIG. 17B and TABLE 2, the sequence identifiers are 1755, 1760, 1765, 1770, 1775, and 1780.

[00425] The stimuli orchestration component 1610 can receive the characteristics of each sequence. The stimuli orchestration component 1610 can transmit, configure, load, instruct or otherwise provide the sequence characteristics to a signaling component 1630a-n. In some embodiments, the stimuli orchestration component 1610 can provide the sequence characteristics to the visual NSS 105 or the auditory NSS 905, while in some cases the stimuli orchestration component 1610 can directly provide the sequence characteristics to the visual signaling component 150 or audio signaling component 950.

[00426] The NSOS 1605 can determine, from the TABLE 2 data structure, that the mode of stimulation for sequences 1755, 1760 and 1765 is visual by parsing the table and identifying the mode. The NSOS 1605, responsive to determining that the mode is visual, can provide the information or characteristics associated with sequences 1755, 1760 and 1765 to the visual NSS 105. The NSS 105 (e.g., via the light generation module 110) can parse the sequence characteristics and then instruct the visual signaling component 150 to generate and transmit the corresponding visual signals. In some embodiments, the NSOS 1605 can directly instruct the visual signaling component 150 to generate and transmit visual signals corresponding to sequences 1755, 1760 and 1765.

[00427] The NSOS 1605 can determine, from the TABLE 2 data structure, that the mode of stimulation for sequences 1770, 1775 and 1780 is audio by parsing the table and identifying the mode. The NSOS 1605, responsive to determining that the mode is audio, can provide the information or characteristics associated with sequences 1770, 1775 and 1780 to the auditory NSS 905. The NSS 905 can parse the sequence characteristics and then instruct the audio signaling component 950 to generate and transmit the corresponding audio signals. In some embodiments, the NSOS 1605 can directly instruct the audio signaling component 950 to generate and transmit audio signals corresponding to sequences 1770, 1775 and 1780.

[00428] For example, the first sequence 1755 can include a visual signal. The signal type can include light pulses 235 generated by a light source 305 that includes a laser. The light pulses can include light waves having a wavelength corresponding to the color red in the visible spectrum. The intensity of the light can be set to low. An intensity level of low can correspond to a low contrast ratio (e.g., relative to the level of ambient light) or a low absolute intensity. The pulse width for the light burst can correspond to pulse width 230a depicted in FIG. 2C. The stimulation frequency can be 40 Hz, or correspond to a pulse rate interval (“PRI”) of 0.025 seconds. The first sequence 1755 can run from t0 to t8. The first sequence 1755 can run for the duration of the session or treatment. The first sequence 1755 can run while one or more other sequences are also running. The time intervals can refer to absolute times, time periods, numbers of cycles, or other events. The time interval from t0 to t8 can be, for example, 1 minute, 2 minutes, 3 minutes, 4 minutes, 5 minutes, 7 minutes, 10 minutes, 12 minutes, 15 minutes, 20 minutes, or more or less. The time interval can be cut short or terminated by the subject or responsive to feedback information. The time intervals can be adjusted based on profile information or by the subject via an input device.

[00429] The second sequence 1760 can be another visual signal that begins at t1 and ends at t4. The second sequence 1760 can include a signal type of a checkerboard image pattern that is provided by a display screen of a tablet. The signal parameters can include the colors black and white such that the checkerboard alternates black and white squares. The intensity can be high, which can correspond to a high contrast ratio relative to ambient light; or there can be a high contrast between the objects in the checkerboard pattern. The pulse width for the checkerboard pattern can be the same as the pulse width 230a as in sequence 1755. Sequence 1760 can begin and end at a different time than sequence 1755. For example, sequence 1760 can begin at t1, which can be offset from t0 by 5 seconds, 10 seconds, 15 seconds, 20 seconds, 30 seconds, 1 minute, 2 minutes, 3 minutes, or more or less. The visual signaling component 150 can initiate the second sequence 1760 at t1, and terminate the second sequence at t4. Thus, the second sequence 1760 can overlap with the first sequence 1755.

[00430] While pulse trains or sequences 1755 and 1760 can overlap with one another, the pulses 235 of the second sequence 1760 may not overlap with the pulses 235 of the first sequence 1755. For example, the pulses 235 of the second sequence 1760 can be offset from the pulses 235 of the first sequence 1755 such that they are non-overlapping.

[00431] The third sequence 1765 can include a visual signal. The signal type can include ambient light that is modulated by actuated shutters configured on frames (e.g., frames 400 depicted in FIG. 4B). The pulse width can vary from 230c to 230a in the third sequence 1765. The stimulation frequency can still be 40 Hz, such that the PRI is the same as the PRI in sequences 1760 and 1755. The pulses 235 of the third sequence 1765 can at least partially overlap with the pulses 235 of sequence 1755, but may not overlap with the pulses 235 of sequence 1760. Further, the pulse 235 can refer to blocking ambient light or allowing ambient light to be perceived by the eyes. In some embodiments, pulse 235 can correspond to blocking ambient light, in which case the laser light pulses 1755 may appear to have a higher contrast ratio. In some cases, the pulses 235 of sequence 1765 can correspond to allowing ambient light to enter the eyes, in which case the contrast ratio for pulses 235 of sequence 1755 may be lower, which may mitigate adverse side effects.

[00432] The fourth sequence 1770 can include an auditory stimulation mode. The fourth sequence 1770 can include upchirp pulses 1035. The audio pulses can be provided via headphones or speakers 1205 of FIG. 12B. For example, the pulses 1035 can correspond to modulating music played by an audio player 1220 as depicted in FIG. 12B. The modulation can range from Ma to Mc. The modulation can refer to modulating the amplitude of the music. The amplitude can refer to the volume. Thus, the NSOS 1605 can instruct the audio signaling component 950 to increase the volume from a volume level Ma to a volume level Mc during a duration PW 1030a, and then return the volume to a baseline level or muted level in between pulses 1035. The PRI 240 can be 0.025 seconds, or correspond to a 40 Hz stimulation frequency. The NSOS 1605 can instruct the fourth sequence 1770 to begin at t3, which overlaps with visual stimulation sequences 1755, 1760 and 1765.

[00433] The fifth sequence 1775 can include another audio stimulation mode. The fifth sequence 1775 can include acoustic bursts. The acoustic bursts can be provided by the headphones or speakers 1205 of FIG. 12B. The sequence 1775 can include pulses 1035. The pulses 1035 can vary from one pulse to another pulse in the sequence. The fifth waveform 1775 can be configured to re-focus the subject to increase the subject’s attention level to the neural stimulation. The fifth sequence 1775 can increase the subject’s attention level by varying parameters of the signal from one pulse to the other pulse. The fifth sequence 1775 can vary the frequency from one pulse to the other pulse. For example, the first pulse 1035 in sequence 1775 can have a higher frequency than the previous sequences. The second pulse can be an upchirp pulse having a frequency that increases from a low frequency to a high frequency. The third pulse can be a sharper upchirp pulse that has a frequency that increases from an even lower frequency to the same high frequency. The fifth pulse can have a low stable frequency. The sixth pulse can be a downchirp pulse going from a high frequency to the lowest frequency. The seventh pulse can be a high frequency pulse with a small pulse width. The fifth sequence 1775 can begin at t4 and end at t7. The fifth sequence can overlap with sequence 1755, and partially overlap with sequences 1765 and 1770. The fifth sequence may not overlap with sequence 1760. The stimulation frequency can be 39.8 Hz.

[00434] The sixth sequence 1780 can include an audio stimulation mode. The signal type can include pressure or air provided by an air jet. The sixth sequence can begin at t6 and end at t8. The sixth sequence 1780 can overlap with sequence 1755, and partially overlap with sequences 1765 and 1775. The sixth sequence 1780 can end the neural stimulation session along with the first sequence 1755. The air jet can provide pulses 1035 with pressure ranging from a high pressure Mc to a low pressure Ma. The pulse width can be 1030a, and the stimulation frequency can be 40 Hz.
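
Because the body of TABLE 2 is not reproduced above, the sketch below restates the six sequences as a simple table-like data structure, using only values taken from the descriptions in paragraphs [00428] through [00434]. The field names and the way the timing labels (t0 through t8) are stored are assumptions about how such a table might be organized, not the actual TABLE 2 schema.

```python
# Each entry mirrors the characteristics listed in paragraph [00424]:
# identifier, mode, signal type, parameters, stimulation frequency, and timing schedule.
SEQUENCES = [
    {"id": 1755, "mode": "visual",   "signal": "laser light pulses",
     "params": {"color": "red", "intensity": "low"},              "freq_hz": 40.0, "start": "t0", "end": "t8"},
    {"id": 1760, "mode": "visual",   "signal": "checkerboard pattern on tablet display",
     "params": {"colors": ["black", "white"], "intensity": "high"}, "freq_hz": 40.0, "start": "t1", "end": "t4"},
    {"id": 1765, "mode": "visual",   "signal": "ambient light via actuated shutters",
     "params": {"pulse_width": "230c to 230a"},                    "freq_hz": 40.0, "start": None, "end": None},
    {"id": 1770, "mode": "auditory", "signal": "music amplitude modulation (upchirp pulses)",
     "params": {"modulation": "Ma to Mc"},                         "freq_hz": 40.0, "start": "t3", "end": None},
    {"id": 1775, "mode": "auditory", "signal": "acoustic bursts with varying pulses",
     "params": {"purpose": "re-focus attention"},                  "freq_hz": 39.8, "start": "t4", "end": "t7"},
    {"id": 1780, "mode": "auditory", "signal": "air jet pressure pulses",
     "params": {"pressure": "Mc to Ma"},                           "freq_hz": 40.0, "start": "t6", "end": "t8"},
]

def sequences_for_mode(mode):
    """Select the sequences handled by one stimulation mode, as in paragraphs [00426]-[00427]."""
    return [s for s in SEQUENCES if s["mode"] == mode]
```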

[00435] The NSOS 1605 can adjust, change, or otherwise modify sequences or pulses based on feedback. In some embodiments, the NSOS 1605 can determine, based on the profile information, policy, and available components, to provide neural stimulation using both visual signals and auditory signals. The NSOS 1605 can determine to synchronize the transmit time of the first visual pulse train and the first audio pulse train. The NSOS 1605 can transmit the first visual pulse train and the first audio pulse train for a first duration (e.g., 1 minute, 2 minutes, or 3 minutes). At the end of the first duration, the NSOS 1605 can ping an EEG probe to determine a frequency of neural oscillation in a region of the brain. If the frequency of oscillation is not at the desired frequency of oscillation, the NSOS 1605 can select a sequence out of order or change the timing schedule of a sequence.

[00436] For example, the NSOS 1605 can ping a feedback sensor at t1. The NSOS 1605 can determine, at t1, that neurons of the primary visual cortex are oscillating at the desired frequency. Thus, the NSOS 1605 can determine to forego transmitting sequences 1760 and 1765 because neurons of the primary visual cortex are already oscillating at the desired frequency. The NSOS 1605 can determine to disable sequences 1760 and 1765. The NSOS 1605, responsive to the feedback information, can disable the sequences 1760 and 1765. The NSOS 1605, responsive to the feedback information, can modify a flag in the data structure corresponding to TABLE 2 to indicate that the sequences 1760 and 1765 are disabled.

[00437] The NSOS 1605 can receive feedback information at t2. At t2, the NSOS 1605 can determine that the frequency of neural oscillation in the primary visual cortex is different from the desired frequency. Responsive to determining the difference, the NSOS 1605 can enable or re-enable sequence 1765 in order to stimulate the neurons in the primary visual cortex such that the neurons may oscillate at the desired frequency.

[00438] Similarly, the NSOS 1605 can enable or disable audio stimulation sequences 1770, 1775 and 1780 based on feedback related to the auditory cortex. In some cases, the NSOS 1605 can determine to disable all audio stimulation sequences if the visual sequence 1755 is successfully affecting the frequency of neural oscillations in the brain at each time period t1, t2, t3, t4, t5, t6, t7, and t8. In some cases, the NSOS 1605 can determine that the subject is not paying attention at t4, and go from only enabling visual sequence 1755 directly to enabling audio sequence 1775 to re-focus the subject using a different stimulation mode.
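
The checkpoint logic in paragraphs [00436] through [00438] can be summarized by a hypothetical routine that, at each feedback time point, disables sequences whose target region is already entrained and re-enables them when the measured oscillation drifts. The region-to-sequence mapping and tolerance values are assumptions for illustration.

```python
def update_enabled_sequences(enabled, region_freqs_hz, region_to_sequences,
                             target_hz=40.0, tolerance_hz=1.0):
    """At a feedback checkpoint, disable sequences for regions already oscillating near
    the target frequency and re-enable them otherwise."""
    for region, measured in region_freqs_hz.items():
        entrained = abs(measured - target_hz) <= tolerance_hz
        for seq_id in region_to_sequences.get(region, []):
            if entrained:
                enabled.discard(seq_id)   # forego redundant stimulation
            else:
                enabled.add(seq_id)       # re-enable to push the region toward the target
    return enabled

# Example at t1: the primary visual cortex is already entrained, so 1760 and 1765 are disabled.
enabled = {1755, 1760, 1765}
enabled = update_enabled_sequences(enabled, {"V1": 40.2}, {"V1": [1760, 1765]})
```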

Method for Neural Stimulation Via Visual Stimulation and Auditory Stimulation

[00439] FIG. 18 is a flow diagram of a method for neural stimulation via visual stimulation and auditory stimulation in accordance with an embodiment. The method 1800 can be performed by one or more system, component, module or element depicted in FIGS. 1-17B, including, for example, a neural stimulation orchestration component or neural stimulation system. In brief overview, the NSOS can identify multiple modes of signals to provide at block 1805. At block 1810, the NSOS can generate and transmit the identified signals corresponding to the multiple modes. At 1815, the NSOS can receive or determine feedback associated with neural activity, physiological activity, environmental parameters, or device parameters. At 1820, the NSOS can manage, control, or adjust the one or more signals based on the feedback.
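
The four blocks of FIG. 18 map naturally onto a control loop. The sketch below is a schematic rendering of that flow under assumed function names and timing; it is not the actual implementation of method 1800.

```python
import time

def run_session(identify_modes, transmit, read_feedback, adjust,
                duration_s=3600, step_s=60):
    """Schematic of method 1800: identify modes (1805), transmit signals (1810),
    gather feedback (1815), and adjust (1820), repeated over the session."""
    signals = identify_modes()               # block 1805: choose visual/auditory signals
    start = time.time()
    while time.time() - start < duration_s:
        transmit(signals)                    # block 1810: generate and transmit the signals
        feedback = read_feedback()           # block 1815: neural, physiological, environmental data
        signals = adjust(signals, feedback)  # block 1820: manage, control, or adjust the signals
        time.sleep(step_s)
```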

[00440] While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any inventions or of what can be claimed, but rather as descriptions of features specific to particular embodiments of particular aspects. Certain features described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features can be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination can be directed to a subcombination or variation of a subcombination.

[00441] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing can be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated in a single software product or packaged into multiple software products.

[00442] References to “or” may be construed as inclusive so that any terms described using “or” may indicate any of a single, more than one, and all of the described terms. References to at least one of a conjunctive list of terms may be construed as an inclusive OR to indicate any of a single, more than one, and all of the described terms. For example, a reference to “at least one of ‘A’ and ‘B’” can include only ‘A’, only ‘B’, as well as both ‘A’ and ‘B’.

[00443] Thus, particular exemplary embodiments of the subject matter have been described. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results.

[00444] The present technology, including the systems, methods, devices, components, modules, elements or functionality described or illustrated in, or in association with, the figures can treat, prevent, protect against or otherwise affect brain atrophy and disorders, conditions and diseases associated with brain atrophy.

Neural Stimulation System with Sleep-Related Monitoring Modules

[00445] FIG. 33 provides a neural stimulation system comprising a stimulus delivery system coupled to an analysis and monitoring system. In some embodiments, the present technological solution comprises a stimulus delivery system which includes one or more of: one or more Audio Stimulus Module (110), one or more Visual Stimulus Module (120). These modules may be in addition to tactile or other stimulus modules (not shown). These modules provide the delivery of audio or visual stimulus at specific parameter values. In some embodiments the values of these parameters are responsive to one or more of: one or more Audio Monitoring Module (111), one or more Visual Monitoring Module (121).

[00446] In some embodiments, the present technological solution includes one or more of: one or more Feedback Module (150) collecting, storing, or processing feedback from users or third parties; one or more Profile Module (161) storing and processing profile or demographic information related to one or more users or third parties, or of populations of users or third parties; one or more History Module (162) storing or processing history and logs related to one or more users or third parties, or of populations of users or third parties; one or more Monitoring Module (163), collecting, storing, logging, and/or analysing aspects of one or more users or third parties, including but not limited to: aspects of the environment, state, behavior, input, responses, diagnosis, disease progression, compliance, engagement, mood, adherence. In some embodiments the present technological solution includes one or more Brain Wave Monitoring Module (190) measuring and analysing brain wave activity in one or more users, including but not limited to detecting and characterizing gamma wave power and gamma entrainment.

[00447] In some embodiments, the present technological solution includes one or more of: one or more Actigraphy Monitoring Module (130), one or more Sleep Analysis Module (140). In some embodiments, one or more Sleep Analysis Module is responsive, at least in part, to information communicated from one or more Actigraphy Monitoring Module. In some embodiments, a Sleep Analysis Module performs sleep analysis based at least in part on actigraphy information collected at least in part by an Actigraphy Monitoring Module. In some embodiments, a Sleep Analysis Module performs one or more analysis steps described in FIG. 37.
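
As a rough illustration of how a Sleep Analysis Module might operate on data from an Actigraphy Monitoring Module, the sketch below counts nighttime active periods and their total duration from per-epoch activity counts. The activity threshold and epoch length are assumptions for illustration, not parameters taken from FIG. 37.

```python
def nighttime_active_periods(activity_counts, threshold=40, epoch_minutes=1):
    """Given per-epoch actigraphy counts recorded overnight, return the number of
    active periods and their total duration in minutes. Threshold is illustrative."""
    periods = 0
    total_active_epochs = 0
    in_active_period = False
    for count in activity_counts:
        active = count >= threshold
        if active:
            total_active_epochs += 1
            if not in_active_period:
                periods += 1          # a new nighttime active period begins
        in_active_period = active
    return periods, total_active_epochs * epoch_minutes

# e.g., two brief awakenings in an otherwise restful night:
counts = [0, 2, 0, 55, 60, 3, 0, 0, 48, 1]
print(nighttime_active_periods(counts))   # -> (2, 3)
```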

[00448] In some embodiments, one or more of an Audio Stimulus Module, a Visual Stimulus Module, and/or a Stimulus Delivery System (170) managing or incorporating one or more stimulus modules, may be responsive to one or more of: one or more Analysis and Monitoring System (130) and/or monitoring modules, including but not limited to: one or more Feedback Module (150), one or more Profile Module (161), one or more History Module (162), one or more Monitoring Module (163), one or more Sleep Analysis Module (140), one or more Actigraphy Monitoring Module (130), one or more Brain Wave Monitoring Module (190), and/or one or more Stimulus Delivery System (170) managing or incorporating one or more analysis and monitoring module.

COMBINATION THERAPIES

[00449] In one aspect, the present disclosure provides combination therapies comprising the administration of one or more additional therapeutic regimens in conjunction with methods described herein. In some embodiments, the additional therapeutic regimens are directed to the treatment or prevention of the disease or disorder targeted by methods of the present technology.

[00450] In some embodiments, the additional therapeutic regimens comprise administration of one or more pharmacological agents that are used to treat or prevent disorders targeted by methods of the present technology. In some embodiments, methods of the present technology facilitate the use of lower doses of pharmacological agents to treat or prevent targeted disorders.

[00451] In some embodiments, the additional therapeutic regimens comprise non-pharmacological therapies that are used to treat or prevent disorders targeted by methods of the present technology such as, but not limited to, cognitive or physical therapeutic regimens.

[00452] In some embodiments, a pharmacological agent is administered in conjunction with therapeutic methods described herein. In some embodiments, the pharmacological agent is directed to inducing a relaxed state in a subject administered methods of the present technology. In some embodiments, the pharmacological agent is directed to inducing a heightened state of awareness in a subject administered methods of the present technology. In some embodiments, the pharmacological agent is directed to modulating neuronal and/or synaptic activity. In some embodiments, the agent promotes neuronal and/or synaptic activity. In some embodiments, the agent targets a cholinergic receptor. In some embodiments, the agent is a cholinergic receptor agonist. In some embodiments, the agent is acetylcholine or an acetylcholine derivative. In some embodiments, the agent is an acetylcholinesterase inhibitor.

[00453] In some embodiments, the agent inhibits neuronal and/or synaptic activity. In some embodiments, the agent is a cholinergic receptor antagonist. In some embodiments, the agent is an acetylcholine inhibitor or an acetylcholine derivative inhibitor. In some embodiments, the agent is acetylcholinesterase or an acetylcholinesterase derivative.

EMBODIMENTS

[00454] The following non-limiting embodiments provide illustrative examples of the invention, but do not limit the scope of the invention.

[00455] Embodiment 1. A method of improving a sleep quality experienced by a subject, the method of improving a sleep quality comprising administering an audio and a visual stimulus to the subject at a frequency effective to reduce sleep fragmentation.

[00456] Embodiment 2. The method of Embodiment 1, wherein the frequency is between 20 and 60 Hertz.

[00457] Embodiment 3. The method of Embodiment 1, wherein the frequency is about 40 Hertz.

[00458] Embodiment 4. The method of Embodiment 1, wherein the method of improving sleep comprises reducing a duration of nighttime active periods experienced during sleep.

[00459] Embodiment 5. The method of Embodiment 1, wherein the method of improving sleep comprises reducing a number of nighttime active periods experienced during sleep.

[00460] Embodiment 6. The method of Embodiment 1, wherein the method of improving sleep comprises increasing a duration of slow wave sleep or a duration of rapid eye movement sleep experienced by the subject.

[00461] Embodiment 7. The method of Embodiment 1, wherein the subject has Alzheimer’s disease.

[00462] Embodiment 8. The method of Embodiment 1, wherein the subject has Mild Cognitive Impairment.

[00463] Embodiment 9. The method of Embodiment 4, wherein reducing the duration of nighttime active periods comprises reducing the duration of active periods by at least half.

[00464] Embodiment 10. A method of prolonging nighttime undisturbed restful periods in a subject, the method of prolonging nighttime undisturbed restful periods comprising administering a noninvasive sensory stimulus comprising audio stimulus and visual stimulus to the subject at a frequency effective to induce synchronized gamma oscillations in at least one brain region of the subject.

[00465] Embodiment 11. The method of Embodiment 10, wherein the method comprises reducing the amyloid beta burden in the at least one brain region of the subject.

[00466] Embodiment 12. The method of Embodiment 10, wherein the method comprises reducing the frequency of nighttime active periods experienced by the subject.

[00467] Embodiment 13. The method of Embodiment 10, wherein the method comprises increasing the duration of slow wave sleep experienced by the subject.

[00468] Embodiment 14. The method of Embodiment 10, wherein the method comprises increasing the duration of rapid eye movement sleep experienced by the subject.

[00469] Embodiment 15. The method of Embodiment 10, wherein the method comprises reducing the duration of nighttime active periods experienced by the subject.

[00470] Embodiment 16. The method of Embodiment 10, wherein the method is repeated regularly.

[00471] Embodiment 17. The method of Embodiment 10, wherein the subject has Alzheimer’s disease or Mild Cognitive Impairment.

[00472] Embodiment 18. The method of Embodiment 17, wherein the method further comprises slowing the progression of cognitive impairment associated with Alzheimer’s disease.

[00473] Embodiment 19. A method of treating a sleep disorder in a subject in need thereof, the method of treating the sleep disorder comprising administering an audio stimulus and a visual stimulus at a frequency effective to improve brain wave coherence.

[00474] Embodiment 20. The method of Embodiment 19, wherein the frequency effective to improve brainwave coherence is between 5 and 100 Hertz.

[00475] Embodiment 21. The method of Embodiment 19, wherein the frequency effective to improve brainwave coherence is about 40 Hertz.

[00476] Embodiment 22. The method of Embodiment 19, wherein the subject is at risk of developing Alzheimer’s Disease.

[00477] Embodiment 23. The method of Embodiment 19, wherein the sleep disorder comprises insomnia.

[00478] Embodiment 24. The method of Embodiment 19, wherein the subject experiences diminished slow wave sleep, reduced rapid eye movement sleep, or a combination thereof.

[00479] Embodiment 25. The method of Embodiment 19, wherein the sleep disorder worsens a cognitive function.

EXAMPLES

Example 1. Human Clinical Study of Safety, Efficacy, and Results of Treatment

METHODS AND STUDY DESIGN

[00480] A clinical study was performed to assess the safety, tolerability, and efficacy of long-term, daily use of gamma sensory stimulation therapy on cognition, functional ability, and biomarkers in a mild-to-moderate AD population via a prospective clinical study. The clinical study was a multi-center, randomized controlled trial evaluating daily gamma sensory stimulation received at home for a 6-month treatment period. Subjects included in the study were adults 50 years and older with a clinical diagnosis of mild to moderate AD (MMSE: 14-26, inclusive), a reliable care partner, and successful tolerance and entrainment screening via EEG. Key exclusion criteria included profound hearing or visual impairment, use of memantine, major psychiatric illness, clinically relevant history of seizure, or contraindication to imaging studies.

[00481] Study Participants and Design. A total of 135 patients were assessed for eligibility to participate in the study. Patients were first given a screening EEG, and then split into groups. One group was a sham control group that was not given treatment; the other group received 1 hour of therapy per day, which involved subjecting the subject to audio and visual stimulation at a frequency of 40 Hz. Of those assessed for eligibility, 76 were randomized between the active treatment and sham control. Forty-seven of the randomized patients were allocated to the active group and 29 were allocated to the sham group. Of the active group, two patients withdrew prior to therapy and three had no post-baseline efficacy data and were not included in the modified intent to treat (mITT) population. In the sham group, one patient received active treatment and was not included in the sham population. Completers included 33 patients in the active group and 28 in the sham group, with 10 early discontinuations in the active group. Seven of those discontinuations were due to consent withdrawal and three were attributed to adverse events, whereas in the sham group, only six withdrew consent and one discontinued as a result of adverse events.

[00482] The study employed various clinical outcome assessment scales to assess cognitive decline or dysfunction. These included the Neuropsychiatric Inventory (NPI), Clinical Dementia Rating-Sum of Boxes (CDR-sb), the Clinical Dementia Rating-Global Score (CDR global), the Mini-Mental State Exam (MMSE), the Alzheimer’s Disease Assessment Scale - Cognitive Subscale-14 (ADAS-Cog14), and a variation of the Alzheimer’s Disease Composite Score (ADCOMS) as optimized for patients with mild or moderate Alzheimer’s Disease. NPI examines 12 sub-domains of behavioral functioning: delusions, hallucinations, agitation/aggression, dysphoria, anxiety, euphoria, apathy, disinhibition, irritability/lability, aberrant motor activity, night-time behavioral disturbances, and appetite and eating abnormalities. The NPI can be used to screen for multiple types of dementia, and it involves giving the caregiver of a subject the questions and then, based on the answers, rating the frequency of the symptoms, their severity, and the distress the symptoms cause on a three, four, and five-point scale, respectively.

[00483] CDR global is calculated based on testing performed for six different cognitive and behavioral domains: memory, orientation, judgment and problem solving, community affairs, home and hobbies performance, and personal care. To test these areas, an informant is given a set of questions about a subject’s memory problem, judgment and problem-solving ability of the subject, community affairs of the subject, home life and hobbies of the subject, and personal questions related to the subject. The subject is given another set of questions that includes memory-related questions, orientation-related questions, and questions about judgment and problem-solving ability. The CDR global score is calculated based on the results of those questions, and it is measured using a scale of 0 to 3, with 0 representing no dementia, 0.5 indicating very mild dementia, 1 indicating mild dementia, 2 indicating moderate dementia/cognitive impairment, and 3 indicating severe dementia/cognitive impairment. CDR-sb is a clinical outcome assessment that looks at the functional impact of cognitive impairment: memory, executive function, and instrumental and basic activities of daily living, and assesses them based on interviews with an informant and the patient. The CDR-sb score is based on assessment of items including memory, orientation, judgment and problem solving, community affairs, home and hobbies, and personal care. The CDR-sb is scored from 0 to 18, with higher scores representing greater severity of cognitive and functional impairment.

[00484] The MMSE looks at 11 items to assess memory, language, praxis and executive function based on a cognitive assessment of the patient. Items assessed include registration, recall, constructional praxis, attention and concentration, language, orientation to time, and orientation to place. The MMSE is scaled from 0 to 30, with higher scores representing lower severity of cognitive dysfunction. The ADAS-Cog14 assesses memory, language, praxis and executive function. The score is based on a cognitive assessment of the patient and assesses fourteen items: spoken language, maze, comprehension of spoken language, remembering word recognition test instructions, ideational praxis, commands, naming, word finding difficulty, constructional praxis, orientation, digit cancellation, word recognition, word recall, and delayed recall. A score is based on points allocated to each item, and the maximum total score is 90, with higher numbers indicating greater severity of cognitive dysfunction. The Alzheimer’s Disease Composite Score (ADCOMS) considers items from all of the above-discussed scores: items from the Alzheimer’s Disease Assessment Scale-cognitive subscale, MMSE items, and all of the CDR-sb items. ADCOMS combines portions of the ADAS-cog, Clinical Dementia Rating (CDR) scale, and MMSE that have been shown to change the most over time in people who do not have functional impairment yet. MADCOMS, which was used in the present example, optimizes the scale instead by combining items more significant for mild and moderate dementia.

[00485] The study design involved primary efficacy endpoints of MADCOMS, ADAS-Cog14, and CDR-sb. Unlike ADCOMS, MADCOMS is optimized for patients with moderate or mild Alzheimer’s Disease. These endpoints were optimized for AD-specific decline. A separate optimization was done for moderate and mild AD. Secondary efficacy endpoints consisted of ADCS-ADL, ADCOMS (adjusted), MMSE, CDR-global score, and the Neuropsychiatric Inventory (NPI). Of the secondary endpoints, ADCS-ADL was measured monthly and MMSE was measured at the last time point.

[00486] The efficacy endpoints were analyzed by applying a linear model of analysis and/or a separate means model of analysis. The linear model of analysis involved employing a linear fit model to determine a value at T0 based on the difference from baseline in conditions at the end of the study. The separate means analysis employed estimates of mean values at each assessed timepoint, which was either a monthly timepoint or at three and six months after treatment began, depending on the score that was being analyzed. In evaluating the MADCOMS composite score, for example, the separate means analysis was applied using mean values that were estimated at three and six months. The linear model was applied by using the estimates of treatment difference at the end of the study and connecting a straight line to 0. Similar models were used for the other efficacy endpoints. FIGS. 20, 21, 22, 23, and 24 show the various linear and separate means models generated for these endpoints.
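
To make the two analysis approaches more concrete, the sketch below shows (i) a linear fit connecting a zero treatment difference at T0 to the estimated difference at the end of the study, and (ii) one common convention for deriving a "percent slowing" figure from mean declines in the sham and active arms. The formula and the example values are assumptions used for illustration, not a statement of the study's exact statistical model or data.

```python
def linear_fit_difference(end_of_study_difference, months_total=6):
    """Linear model: treatment difference assumed 0 at T0 and growing linearly to the
    end-of-study estimate, giving an interpolated difference at each month."""
    return [end_of_study_difference * month / months_total for month in range(months_total + 1)]

def percent_slowing(sham_decline, active_decline):
    """Percent slowing of decline in the active arm relative to the sham arm
    (e.g., a sham decline of 9.0 points and an active decline of 5.85 points -> 35%)."""
    return 100.0 * (sham_decline - active_decline) / sham_decline

print(percent_slowing(9.0, 5.85))   # -> 35.0
```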

[00487] To assess biomarkers, researchers used MRI, volumetric analyses, EEG, amyloid positron emission tomography (PET), actigraphy, and plasma biomarkers. The study employed structural MRIs, taken before any treatment began and at the end of the sixth month, and assessed these for volume-based morphometry. Volumetric changes for the hippocampus, lateral ventricles, whole cortex (cerebral cortical gray matter) and whole brain (cerebrum and cerebellum) were determined and the rate of atrophy was compared for active and sham groups using a linear model, as demonstrated in FIG. 25. To analyze safety and tolerability, researchers looked for adverse events and the presence of amyloid related imaging abnormalities (ARIA) on MRI. Therapy adherence was also analyzed. Blinding effectiveness for subjects, care partners, and assessors was prospectively analyzed by assessing baseline and follow-up ascertainment of whether the care partner, assessor, or patient thought the patient was on active or sham treatment.

ANALYSIS

[00488] For the MADCOMS composite scores, both means of analysis demonstrated 35% slowing in the decline rate, indicating that the active group progressed less than the placebo arm over the six-month study. When a linear and a means analysis were both employed, the sham group was slightly favored, but not significantly. When these two separate analyses were applied to the ADAS-Cog14 data, both slightly favored the sham group, although not in a statistically significant manner. When CDR-sb results were analyzed, the mean-estimate model found a 28% slowing rate, whereas the linear extraction showed a 26% slowing rate, but the comparisons were not statistically significant.

[00489] Of the secondary endpoints, ADCS-ADL was measured monthly and MMSE was measured at the last time point. When analyzing ADCS-ADL values, the first analysis model used estimates for each month and showed 84% slowing over the 6-month time period. The linear fit model was again employed, and the same 84% slowing was found. When analyzing MMSE values, an 83% slowing was identified.

RESULTS

[00490] FIG. 19 and FIG. 26 summarize the efficacy findings of the study. Following informed consent and screening, a total of 76 subjects were randomized between the active treatment and sham control. The safety population for the study included 74 subjects who received at least one treatment, and the modified intent to treat (mITT) population included a total of 70 subjects, 53 of whom completed the 6-month study and form the basis for the analysis of outcome measures.

Demographic and baseline characteristics

[00491] In terms of demographic and baseline characteristics of the mITT population, following randomization the populations were balanced across gender, baseline MMSE, ApoE4 status, activities of daily living (ADL), and PET amyloid standardized uptake value ratio (SUVR) status; imbalances between the two groups were observed in age, ADAS-Cog11, and CDR-sb scores at baseline. Statistical models included covariates for age and MMSE at baseline.

Safety and tolerability

[00492] Non-invasive gamma sensory stimulation was safe and well-tolerated in the mild and moderate AD subjects. The active group had a lower rate of treatment emergent adverse events (TEAE) than the sham group (67% vs 79%).

[00493] Treatment related AEs (TRAEs), deemed definitely, probably, or possibly related to the therapy, were elevated in the active group versus the sham group (41% vs 32%). One treatment related SAE was noted in the active group for a patient hospitalized for wandering while their care partner was being located; this subject subsequently discontinued the study. Of the randomized subjects, withdrawal rates were similar between the groups (active 28%, sham 29%), including withdrawal rates due to an adverse event (active 7%, sham 7%). TEAEs that occurred more often in the active group were tinnitus, delusions, and broken bone. TEAEs that occurred more often in the sham group were upper respiratory infection, confusion, anxiety, and dizziness.

Clinical assessments

[00494] Over the 6-month treatment period, subjects were evaluated in-clinic and via phone visits for cognitive, functional, and biomarker changes on multiple measures.

[00495] The primary efficacy endpoints demonstrated effects favoring the active group on the MADCOMS (35% slowing; n.s.) and CDR-sb (27% slowing; n.s.), and favoring the sham group on the ADAS-Cog14 (-15% slowing; n.s.). MADCOMS leaned in favor of the active group, but the results were not statistically different. ADAS-Cog14 was slightly in favor of the sham group but not statistically different. CDR-sb was also in favor of the active group, but the difference was not significant, as shown by p-values that ranged between 0.39 and 0.7920.

[00496] Selected secondary endpoints demonstrated significant effects favoring the treatment (active) group. The active group had a significant benefit on functional ability as measured by the ADCS-ADL (p=0.0009), which represented an 84% slowing of decline and a treatment difference of 7.59 points over the six-month duration of the trial (FIG. 26). The active group also demonstrated a significant benefit on the MMSE (ANCOVA p=0.013), which represented an 83% slowing in the rate of decline versus the sham group and a treatment difference of 2.42 points.

Biomarker changes - MRI

[00497] Structural MR imaging was analyzed for volume-based morphometry using an automated image processing pipeline (Biospective, Montreal, Canada). Volumetric changes of the hippocampus, lateral ventricles, whole cortex (cerebral cortical gray matter), and whole brain (cerebrum and cerebellum, no cerebrospinal fluid (CSF)) for each subject were determined; no manual corrections were performed. No significant benefit on hippocampal volume was determined. A statistically significant benefit favoring the active group (p=0.0154) on whole brain volume (WBV) was established, representing a 61% slowing compared to the sham group progression. The treatment value for the active group was 9.34 cm3.

CONCLUSIONS

[00498] Gamma sensory stimulation was safe and well tolerated. Two of three primary efficacy outcomes (MADCOMS, CDR-sb) favored the active group but did not reach significance. Selected secondary endpoints demonstrated that active treatment with gamma sensory stimulation therapy led to significant benefits in the ability to perform activities of daily living (via the ADCS-ADL) and in cognition (via the MMSE), representing important treatment and management objectives for AD patients. Quantitative MR analysis demonstrated slowing of brain atrophy as measured by whole brain volume in the active group. The combined clinical and biomarker findings suggest that the beneficial effects of gamma sensory stimulation for AD subjects may be facilitated via differentiated pathways. These surprising results indicate that gamma sensory stimulation may be used to treat a range of diseases and disorders that cause or are caused by brain atrophy.

Example 2. Human Clinical Study to Determine Efficacy of NSS Treatment for Sleep Abnormalities

METHODS

[00499] Study Participants and Design. Patients included in the present interim analysis were clinically diagnosed with mild to moderate AD and were under the care of their neurologist. Inclusion criteria were age of 55 years or older, MMSE score of 14-26, and participation of a caregiver, whereas exclusion criteria included profound hearing or visual impairment, seizure disorder, use of memantine, or implantable, non-MR compatible devices. Patients on therapy with an acetylcholinesterase inhibitor could enroll, but their dosing was maintained unchanged during the trial. Patients were randomized to receive either 40 Hz simultaneous auditory and visual sensory stimulation by an NSS (treatment group; n=14) or placebo treatment (sham group; n=8).

[00500] Neural Stimulation System (NSS). In the present study, the system used for neural stimulation provided noninvasive sensory stimulation, comprising visual and audio stimulation, to invoke gamma oscillations in a brain region, thereby improving sleep. Use of such a system is referred to herein as NSS therapy or NSS treatment. The system logs device usage and stimulation output settings for adherence monitoring. This information is uploaded to a secure cloud server for remote monitoring by a physician. The present experiment utilized an NSS that included a handheld controller, an eye-set for visual stimulation, and headphones for auditory stimulation that work together to deliver precisely timed, non-invasive stimulation to induce steady-state gamma brainwave activity. The visual stimulation generated by the NSS consisted of precisely timed flashes of visible light from light emitting diodes, and the auditory stimulation consisted of short-duration clicks. The stimuli occurred at a pulse repetition frequency of 40 Hz. The on-off periods of the visual stimulation were perceivable by the patient but not disruptive; an individual remained aware of their surroundings and could converse with a care partner during use of the system. The customized stimulation output was determined and verified by a physician based on both patient-reported comfort information and the patient's quantitative electroencephalography (EEG) response to the stimulation. The NSS was then configured to the determined settings, and all subsequent use would be within this predefined operating range.
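
The timing relationship between the audio and visual stimuli can be illustrated with the short sketch below; the session length and the idea of generating a single shared list of onset times are assumptions made purely for illustration and do not describe the device firmware or the per-patient configuration.

```python
# Illustrative sketch of a 40 Hz pulse schedule for simultaneous audio and
# visual stimuli. Pulse widths and session length are hypothetical.
import numpy as np

PULSE_RATE_HZ = 40.0       # pulse repetition frequency described in the study
SESSION_SECONDS = 60.0     # hypothetical session length for illustration

period = 1.0 / PULSE_RATE_HZ
onsets = np.arange(0.0, SESSION_SECONDS, period)   # one onset every 25 ms

# Both modalities share the same onsets so the stimuli stay phase-locked.
visual_onsets = onsets
audio_onsets = onsets

print(f"{len(onsets)} pulses, inter-pulse interval = {period * 1000:.1f} ms")
```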

[00501] Monitoring Sleep Fragmentation with Actigraphy and Signal Processing. Effects of the NSS therapy on sleep fragmentation were determined by continuously monitoring the activity of AD patients with a wrist-worn actigraphy watch (ActiGraph GT9X), and data were collected daily over a 6-month period. The collected data consisted of raw accelerometer readings in three orthogonal directions recorded at a 30 Hz sampling frequency.

[00502] Preprocessing the Data. Accelerometer data from the three orthogonal dimensions were filtered with a Butterworth bandpass (0.3-3.5 Hz) filter. The magnitude of the bandpass-filtered 3-d accelerometer vector was then down-sampled by a factor of 4. This process was done for all data collected from all patients over the six-month period. Two representations of the data were made: (i) a binary representation and (ii) a smooth representation. For the binary representation, all data were pooled together and a histogram in the log scale was obtained. The resulting histogram had a bimodal distribution, one peak corresponding to higher changes in acceleration and hence high activity periods, and the second peak corresponding to lower changes in acceleration and hence rest periods. Taking the location of the minimum between the two peaks as a threshold, acceleration magnitudes higher than the threshold were represented by 1's and acceleration magnitudes smaller than the threshold were represented by 0's. For the smooth representation, a median filter with a length of six hours was applied to the down-sampled data to obtain a smooth estimate of the activity levels.
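
A minimal sketch of this preprocessing follows, assuming the raw data are available as an (N, 3) NumPy array sampled at 30 Hz; the filter order, the valley-finding heuristic, and the helper names are assumptions, since the text specifies only the passband (0.3-3.5 Hz), the downsampling factor (4), the pooled log-scale histogram threshold, and the six-hour median filter.

```python
# Sketch of the actigraphy preprocessing described above (assumptions noted inline).
import numpy as np
from scipy.signal import butter, sosfiltfilt, medfilt

FS = 30.0  # actigraphy sampling frequency, Hz

def preprocess(acc):
    """Band-pass each axis (0.3-3.5 Hz), take the vector magnitude, downsample by 4."""
    sos = butter(4, [0.3, 3.5], btype="bandpass", fs=FS, output="sos")  # order 4 is an assumption
    filtered = sosfiltfilt(sos, acc, axis=0)
    magnitude = np.linalg.norm(filtered, axis=1)
    return magnitude[::4]                        # effective rate 7.5 Hz

def pooled_threshold(all_magnitudes, bins=200):
    """Threshold at the valley between the two peaks of the pooled log-scale histogram."""
    logm = np.log(np.concatenate(all_magnitudes) + 1e-12)
    counts, edges = np.histogram(logm, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Crude valley search in the middle of the histogram (assumes a bimodal shape).
    mid = slice(bins // 4, 3 * bins // 4)
    valley = centers[np.argmin(counts[mid]) + bins // 4]
    return np.exp(valley)

def binary_representation(magnitude, threshold):
    """1 = activity above the pooled threshold, 0 = rest."""
    return (magnitude > threshold).astype(int)

def smooth_representation(magnitude, fs_ds=FS / 4, hours=6):
    """Six-hour median filter for a smooth activity estimate (slow; shown for clarity)."""
    window = int(hours * 3600 * fs_ds)
    window += (window % 2 == 0)                  # medfilt requires an odd kernel size
    return medfilt(magnitude, kernel_size=window)
```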

[00503] Extracting the Nighttime (Sleep) Segment. Individual 24-h data segments were extracted from 12:00 pm midday on a given day to 12:00 pm midday the next day. The data were labeled with the binary representation for an initial estimate of the active (1's) and rest (0's) periods during the given 24-hour window. This window consisted of three segments: daytime (segment prior to sleep), nighttime (sleep segment), and daytime (segment after sleep). We proposed that the nighttime segment would consist of more 0's than 1's and that the daytime segments would consist of more 1's than 0's. Therefore, an ideal nighttime model was defined, built with a function that takes the value 0 within a continuous period of duration "L" centered at a time "T" and the value 1 outside this region. Given an initial estimate of L and T, the difference between the ideal nighttime model and the binary representation of movement was computed using a quadratic cost function. In this cost function each mismatch, occurring when the binary value is 1 during nighttime or 0 during daytime, contributes 1, and each match, occurring when the binary value is 0 during nighttime or 1 during daytime, contributes 0. The initial estimate for T was taken to be the time point corresponding to the minimum of the smooth representation mentioned above. The initial estimate for L was set to eight hours. The cost function was minimized using unconstrained nonlinear optimization. This led to the best model estimates for L, the nighttime length, and T, the nighttime midpoint, and allowed us to locate the borders of the three segments (daytime, nighttime, daytime) within the 24-hour window. We then extracted the nighttime segment to evaluate the micro-changes within it.
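
A minimal sketch of this model fit is given below, assuming a per-day binary activity vector and its smoothed counterpart are available; the use of Nelder-Mead as the unconstrained optimizer and the function names are assumptions for illustration only.

```python
# Sketch of the ideal-nighttime-model fit described above. `binary` is the 0/1
# activity representation for one 24-hour (noon-to-noon) window and `smooth`
# the median-filtered activity, both at the downsampled rate fs_ds (samples/s).
import numpy as np
from scipy.optimize import minimize

def nighttime_cost(params, binary, fs_ds):
    T, L = params                            # nighttime midpoint and length, seconds
    t = np.arange(binary.size) / fs_ds
    model = np.where(np.abs(t - T) <= L / 2, 0, 1)
    return np.sum((binary - model) ** 2)     # quadratic cost: 1 per mismatch, 0 per match

def extract_nighttime(binary, smooth, fs_ds):
    # Initial T: time of the minimum of the smoothed activity; initial L: eight hours.
    T0 = np.argmin(smooth) / fs_ds
    L0 = 8 * 3600.0
    res = minimize(nighttime_cost, x0=[T0, L0], args=(binary, fs_ds),
                   method="Nelder-Mead")     # unconstrained nonlinear optimization
    T, L = res.x
    start, stop = int((T - L / 2) * fs_ds), int((T + L / 2) * fs_ds)
    return binary[max(start, 0):stop]        # the nighttime (sleep) segment
```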

[00504] Identification of Rest and Active Durations During Nighttime and Relating Them to Sleep. Within the nighttime segments, periods with all 0's are attributable to lack of movement and periods with all 1's are attributable to movement. However, mapping these periods directly to sleep fragmentation faces the problem that the durations of these periods can range from milliseconds to hours in actigraphy data, whereas analysis of sleep is carried out by classifying non-overlapping epochs of 30-second duration into awake and asleep. To link our actigraphy analysis to the analysis time-scales used in sleep studies, all segments of length N were taken and the values in those segments were replaced by the median value over a window of 3N duration centered on the segment. While N=30 s was chosen, it was found that the results were not sensitive to this exact choice. After repeating this process for all short segments, consecutive time points in the nighttime segments corresponding to 0's were identified as rest durations and those corresponding to 1's were identified as active durations.

[00505] Determining the Distributions of Rest and Active Durations. Rest durations across all participants were pooled and the quantity P(t) ≡ ∫_t^∞ p(w) dw, where p(w) is the probability density function of rest durations between w and w+dw, was examined. P(t) represents the fraction of rest durations that are greater than length t and is referred to as the cumulative distribution function. Similarly, the cumulative distribution of the active durations was also calculated, and distributions of both rest and active durations are displayed in FIG. 39.
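
The steps in the preceding two paragraphs can be sketched as follows, under the assumption that runs shorter than N seconds are the ones replaced by the local median (the text is read that way here) and with hypothetical function names; the empirical survival function computed at the end corresponds to P(t), the fraction of durations greater than t.

```python
# Sketch of the duration analysis described above: short runs are replaced by a
# local median, runs of 0's (rest) and 1's (active) are measured, and the
# cumulative distribution P(t) is computed for straight-line fits on
# log-linear (exponential) and log-log (power-law) axes.
import numpy as np

def smooth_short_segments(binary, fs_ds, n_seconds=30):
    """Replace values in runs shorter than N seconds by the median over a ~3N window."""
    n = int(n_seconds * fs_ds)
    x = binary.copy()
    i = 0
    while i < x.size:
        j = i
        while j < x.size and x[j] == x[i]:
            j += 1                                   # [i, j) is one run of equal values
        if j - i < n:                                # short run found
            lo, hi = max(i - n, 0), min(j + n, x.size)
            x[i:j] = int(np.median(x[lo:hi]))        # median over the surrounding window
        i = j
    return x

def run_lengths(binary, value, fs_ds):
    """Durations (in minutes) of consecutive samples equal to `value`."""
    durations, count = [], 0
    for v in binary:
        if v == value:
            count += 1
        elif count:
            durations.append(count / fs_ds / 60.0)
            count = 0
    if count:
        durations.append(count / fs_ds / 60.0)
    return np.array(durations)

def cumulative_distribution(durations):
    """P(t): fraction of durations greater than t, evaluated at the sorted durations."""
    t = np.sort(durations)
    p = 1.0 - np.arange(1, t.size + 1) / t.size
    return t, p
```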

[00506] Assessment of Functional Ability. Activities of daily living were also assessed at baseline and at regular monthly intervals during the 24-week treatment period in the same study population as the actigraphy recordings, using the clinically established ADCS-ADL scale (Galasko, D., D. Bennett, M. Sano, C. Ernesto, R. Thomas, M. Grundman and S. Ferris (1997). "An inventory to assess activities of daily living for clinical trials in Alzheimer's disease. The Alzheimer's Disease Cooperative Study." Alzheimer Dis Assoc Disord 11 Suppl 2: S33-39). The ADCS-ADL assesses the competence of AD patients in basic and instrumental activities of daily living. The assessments were completed by a caregiver in questionnaire format or administered by a healthcare professional as a structured interview with the caregiver. The six basic ADL items cover everyday activities, such as eating, personal grooming, or dressing, also providing information on the level of competence. The 16 instrumental ADL items assess the level of the patient's engagement with basic instruments, such as a phone or kitchen appliances. The ADCS-ADL has been a critical instrument for standardizing assessment in AD clinical trials and is used widely as a functional outcome measure in disease-modifying trials.

[00507] Assessment of Cognitive Function. Subject cognitive function was assessed by the Mini-Mental State Exam (MMSE), a widely used instrument of cognitive function in AD patients; it tests patients' orientation, attention, memory, language, and visual-spatial skills.

[00508] Statistics. All statistical comparisons were done using the Kolmogorov-Smirnov test.
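
For completeness, a two-sample comparison with the Kolmogorov-Smirnov test can be run as in the short sketch below; the data are random placeholders, not study measurements.

```python
# Sketch of a two-sample Kolmogorov-Smirnov comparison applied to
# hypothetical pooled duration samples (placeholder data only).
import numpy as np
from scipy.stats import ks_2samp

first_period = np.random.exponential(scale=10.0, size=500)
second_period = np.random.exponential(scale=8.0, size=500)

statistic, p_value = ks_2samp(first_period, second_period)
print(f"KS statistic = {statistic:.3f}, p = {p_value:.4f}")
```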

RESULTS

[00509] This interim analysis reports results on 22 mild-to-moderate AD subjects who successfully completed the 6-month study. Demographic and clinical characteristics of all patients during the initial assessment are shown in TABLE 3.

[00510] TABLE 3: Demographic and Clinical Characteristics of all Patients During the Initial Assessment

†Mini-Mental State Examination (MMSE) scores range between 0-30, higher scores indicating better cognitive performance.

‡ Alzheimer's Disease Cooperative Study - Activities of Daily Living (ADCS-ADL) scores range between 0-78, higher scores indicating better functioning.

Data on Safety & Adherence

[00511] Sleep Evaluated by Continuous Actigraphy Recordings. Effects of the NSS treatment on sleep were revealed by continually recording actigraphy data and constructing a nighttime sleep model, which allowed assessment of the durations of rest and active periods during sleep. Results from this analysis for a single patient are shown in FIG. 38. FIG. 38 demonstrates nighttime active and rest periods; the level of continuous activity is determined and indicated by the black tracing. Furthermore, intervals were identified as sleep for each night (represented by horizontal light gray bars), and the longest movement periods are indicated by the dark gray bars. All rest and active durations identified by actigraphy data processing were pooled and analyzed from each participant as described in the Methods section, and the results were compared to published data of rest and active periods obtained by polysomnography-based sleep analysis. As evidenced by straight-line fits on a log-linear scale, the rest durations follow an exponential distribution, e^(-t/τ) with τ=10.15 min. In contrast, active durations follow a power law distribution (straight-line fit on a log-log scale), t^(-α) with α=1.67 (FIG. 39). As demonstrated by FIG. 39, the cumulative distributions for pooled nighttime rest (gray) and active (black) durations show exponential and power law distributions, respectively. The X axes of FIG. 39 show the nighttime durations. The Y axes show the cumulative distributions obtained from 14736 hours of data from 23 patients, and the solid lines show the best straight-line fits. Such exponential and power law behaviors have been observed in sleep studies of healthy subjects (Lo, C. C., L. A. Nunes Amaral, S. Havlin, P. C. Ivanov, T. Penzel, J. H. Peter and H. E. Stanley (2002). "Dynamics of Sleep-Wake Transitions During Sleep." Europhys. Lett. 57(5): 625-631; Lo, C. C., T. Chou, T. Penzel, T. E. Scammell, R. E. Strecker, H.-E. Stanley and P. C. Ivanov (2004). "Common scale-invariant patterns of sleep-wake transitions across mammalian species." PNAS 101(50): 17545-17548; Lo, C. C., R. P. Bartsch and P. C. Ivanov (2013). "Asymmetry and Basic Pathways in Sleep-Stage Transitions." Europhys Lett 102(1): 10008). These authors analyzed nighttime sleep and awake states as obtained from polysomnographic recordings of healthy subjects and found that the cumulative distribution of sleep state durations is characterized by an exponential distribution, whereas those of awake state durations were characterized by a power law distribution. Thus, an exponential decay constant of τ=10.9 min for light sleep, τ=12.3 min for deep sleep, and τ=9.9 min for REM sleep durations, and a power law exponent of α=1.1 for awake durations, were reported (Lo, Bartsch et al. 2013). It was found that the nighttime rest and active durations estimated from actigraphy recordings of Alzheimer's disease patients show the same behavior as polysomnographic recordings of healthy subjects. Similarities in the form of the distributions between the results of the experiments described herein and previous work suggest that nighttime rest and active durations as assessed by actigraphy are analogous to sleep and awake states as assessed by polysomnography, and that the effect of therapy on sleep may be indirectly assessed through its effect on active and rest durations.

[00512] Effects of NSS Treatment on Sleep Quality Determined by Continuous Actigraphy Recordings.
Effects of NSS treatment on sleep were determined by comparing the distribution of the length of nighttime uninterrupted rest durations in the first and the second 12-week periods of the study (FIG. 40). Only subjects who wore the actigraphy device for at least six weeks during both the first and last 12-week periods were used for assessing the efficacy of NSS treatment on sleep (N=7 treatment, N=6 sham). To avoid subjects with more data dominating comparisons across periods, the first six weeks of available data closest to the study start and the last six weeks closest to the study end were considered for each subject. Actigraphy recordings from a single patient in the treatment group are shown in FIG. 38, displaying five consecutive nights prior to and five consecutive nights during the treatment period. The X-axis of FIG. 38 shows the time of day, and the Y-axis shows the activity level (in log scale). The black tracings represent the continuous activity levels, and the light gray horizontal bars represent the intervals identified as sleep in each night. The dark gray horizontal bars represent the longest movement periods within each night. The letters A through E correspond to five consecutive nights prior to treatment. The imposed curve shows a smooth (median filtered) activity level, with long movement intervals observed. Letters F through J correspond to five consecutive nights during the treatment period. The imposed curve shows a smooth (median filtered) activity level. Compared to the pre-treatment period, the patient showed fewer and shorter movement periods during treatment. Overall, nighttime active durations were significantly (p<0.03) reduced in the treatment group, whereas active durations were significantly (p<0.03) increased in patients of the sham group. Comparisons between the treatment and sham groups were also made using normalized nighttime active durations. This normalization was done by dividing each active duration by the duration of the corresponding nighttime period. This measure eliminates the potential for variation in total sleep duration to impact the numbers or durations of active periods. This analysis further confirmed opposite changes in nighttime active durations between the treatment and sham groups. Changes in normalized active periods between the first and second 12-week periods showed a significant (p<0.001) reduction in patients of the treatment group, in contrast to a significant increase (p<0.001) in patients of the sham group (FIG. 40). These findings demonstrate a reduction in nighttime active durations in response to NSS treatment, leading to a reduction in sleep fragmentation and improvement in sleep quality, while the opposite was observed in the sham group.
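
A small sketch of the normalization step described above is given below, with hypothetical inputs and function names; each active duration is divided by the length of the nighttime period in which it occurred.

```python
# Sketch of the normalization of nighttime active durations: dividing each
# active duration by the length of its nighttime period removes the influence
# of total sleep length on the comparison. Inputs are hypothetical.
import numpy as np

def normalized_active_durations(nights):
    """nights: iterable of (active_durations_minutes, nighttime_length_minutes)."""
    normalized = []
    for active_durations, nighttime_length in nights:
        normalized.extend(d / nighttime_length for d in active_durations)
    return np.array(normalized)

# Example with made-up values for two nights
nights = [([12.0, 5.5, 3.0], 480.0), ([20.0, 8.0], 450.0)]
print(normalized_active_durations(nights))
```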

[00513] Effects of NSS Treatment on Cognitive Function Assessed by MMSE. MMSE changes were different in the treatment (n=13) and sham (n=8) groups. In the treatment group, the initial assessment showed an MMSE value of 19.9±2.9, which did not change significantly over the duration of the treatment, with an MMSE value of 19.3±3.4 measured at week 24. In contrast, the sham group showed the expected significant decline in MMSE scores: the initial score of 18.5±2.7 dropped to 16.8±5.7 (p<0.05).

[00514] Maintenance of Functional Ability Assessed by ADCS-ADL. Effects of NSS treatment on patients' ability to perform activities of daily living were assessed at baseline and at regular monthly intervals during the 24-week treatment period using the clinically proven ADCS-ADL scale via structured interview with the care partner. Average ADCS-ADL scores were calculated for the first 12-week and second 12-week periods in both the treatment (n=14) and sham (n=8) groups (FIG. 41). The ADCS-ADL is a well-established instrument for testing function of mild to moderate AD patients, and numerous clinical trials have reported a significant decline in ADCS-ADL scores in this patient population over a 6-month period (Loy, C. and L. Schneider (2006). "Galantamine for Alzheimer's disease and mild cognitive impairment." Cochrane Database Syst Rev(1): CD001747; Peskind, E. R., S. G. Potkin, N. Pomara, B. R. Ott, S. M. Graham, J. T. Olin and S. McDonald (2006). "Memantine treatment in mild to moderate Alzheimer disease: a 24-week randomized, controlled trial." Am J Geriatr Psychiatry 14(8): 704-715). In our study, each patient in the sham group showed a decline in ADCS-ADL scores, resulting in a significant (p<0.001) decline of approximately 3 points for this group over the trial period. In contrast, 9 out of 14 patients in the treatment group maintained or even showed improvement in their ADCS-ADL scores. Therefore, the average ADCS-ADL score in the treatment group significantly (p<0.035) increased during the treatment period. Accordingly, FIG. 41 demonstrates that changes in daytime activities showed a significant improvement in the treatment group and a significant decline in the sham group.

DISCUSSION

[00515] This interim analysis of the Overture trial (NCT03556280) demonstrates a beneficial outcome of daily use of the NSS therapy over a six-month period in mild to moderate AD patients: NSS treatment resulted in improved sleep quality and maintained quality of daily living as compared to subjects in the control arm of the study.

[00516] Results, based on the actigraphy data collected over a 6-month period, demonstrate that NSS therapy can reduce sleep fragmentation, leading to significantly reduced active periods during the night in mild to moderate AD patients. In contrast, patients in the sham group did not show improvement in sleep characteristics. Given the well-recognized architecture of human physiological sleep, consisting of subsequent periods of different NREM stages progressing from superficial to deep slow wave sleep followed by a REM sleep period in a strictly sequential order, it is evident that sleep fragmentation can dramatically disrupt sleep architecture and consequently the effectiveness of sleep. Sleep fragmentation, as a symptom of sleep disorders, has multiple impacts on human physiology, including dysfunction not only in the nervous system but also in overall health, by impairing body metabolism or the immune defense system. The detrimental cognitive impacts of sleep abnormalities are particularly worrisome in MCI and AD patients. Therefore, application of NSS therapy offers a novel intervention for improving sleep quality in AD patients. Available clinical data revealed that this therapy is safe and can be applied daily over an extended period of time in patients. Considering that sleep disorders contribute to impaired function and cognition, effective treatments for improving sleep quality potentially have multiple benefits in MCI and AD patients.

[00517] The clinical benefit of NSS therapy on sleep is particularly relevant, since the pathomechanisms underlying sleep dysfunction in MCI and AD patients are not well understood, and therefore developing specific sleep therapies is not currently feasible. AD-related pathological proteins, such as Aβ and tau oligomers, are known to disrupt sleep, though their mode of action is unknown. From an early stage of the disease, ascending brainstem neurons considered to play a role in sleep-wake regulation, including cholinergic, serotonergic, and norepinephrine neurons, show profound degeneration (Smith, M. T., C. S. McCrae, J. Cheung, J. L. Martin, C. G. Harrod, J. L. Heald and K. A. Carden (2018). "Use of Actigraphy for the Evaluation of Sleep Disorders and Circadian Rhythm Sleep-Wake Disorders: An American Academy of Sleep Medicine Systematic Review, Meta-Analysis, and GRADE Assessment." J Clin Sleep Med 14(7): 1209-1230; Tiepolt, S., M. Patt, G. Aghakhanyan, P. M. Meyer, S. Hesse, H. Barthel and O. Sabri (2019). "Current radiotracers to image neurodegenerative diseases." EJNMMI Radiopharm Chem 4(1): 17; Kang, S. S., X. Liu, E. H. Ahn, J. Xiang, F. P. Manfredsson, X. Yang, H. R. Luo, L. C. Liles, D. Weinshenker and K. Ye (2020). "Norepinephrine metabolite DOPEGAL activates AEP and pathological Tau aggregation in locus coeruleus." The Journal of Clinical Investigation 130(1): 422-437). Similarly, the suprachiasmatic nucleus, containing neurons that play a key role in regulating circadian rhythms, also shows neurodegeneration early in the disease (Van Erum, J., D. Van Dam and P. P. De Deyn (2018). "Sleep and Alzheimer's disease: A pivotal role for the suprachiasmatic nucleus." Sleep Med Rev 40: 17-27). There are only limited treatment options for sleep abnormalities in MCI and AD patients, and pharmacological treatments currently include antidepressants, antihistamines, anxiolytics, and sedative-hypnotic drugs such as benzodiazepines (Vitiello, M. V. and S. Borson (2001). "Sleep disturbances in patients with Alzheimer's disease: epidemiology, pathophysiology and treatment." CNS Drugs 15(10): 777-796; Deschenes, C. L. and S. M. McCurry (2009). "Current treatments for sleep disturbances in individuals with dementia." Curr Psychiatry Rep 11(1): 20-26; Ooms, S. and Y. E. Ju (2016). "Treatment of Sleep Disorders in Dementia." Curr Treat Options Neurol 18(9): 40). Some of the most frequently used anxiolytic/sedative-hypnotic drugs in general clinical practice are GABAA positive allosteric modulators, which are contraindicated in MCI and AD patients due to their negative effects on cognitive function, interference with motor behavior, and addiction-forming profile. Recently, suvorexant, an orexin receptor antagonist, has been approved as a sleep medication for AD patients with clinically diagnosed insomnia. The main effects of suvorexant are a prolonged total sleep time and delayed wake after sleep onset, without impacting sleep fragmentation or altering sleep architecture (Herring, W. J., P. Ceesay, E. Snyder, D. Bliwise, K. Budd, J. Hutzelmann, J. Stevens, C. Lines and D. Michelson (2020). "Polysomnographic assessment of suvorexant in patients with probable Alzheimer's disease dementia and insomnia: a randomized trial." Alzheimers Dement 16(3): 541-551). Non-pharmacological treatments include behavioral measures such as sleep hygiene education, exercise regimens, and reduction of noise during sleeping hours.
Bright light therapy is one of the non-pharmacologic modalities recommended by the American Academy of Sleep Medicine for use in sleep disturbances due to circadian disorders. Clinical tests of light therapy in AD patients have yielded conflicting findings (Ouslander, J. G., Connell, B. R., Bliwise, D. L., Endeshaw, Y., Griffiths, P. and Schnelle, J. F. (2006). "A Nonpharmacological Intervention to Improve Sleep in Nursing Home Patients: Results of a Controlled Clinical Trial." Journal of the American Geriatrics Society 54: 38-47; Deschenes et al., 2009), and currently no approved device or therapeutic intervention exists.

[00518] The current findings demonstrate a beneficial effect of NSS therapy in mild to moderate AD patients, prolonging nighttime undisturbed restful periods and indicating reduced sleep fragmentation. There are no proven therapies for reducing sleep fragmentation that could improve sleep quality in MCI or AD patients, and frequently used sedative-hypnotic drugs are detrimental to the physiological architecture of sleep. In monthly interviews with patients and caregivers about everyday activities and sleep habits, there was no indication that NSS treatment leads to daytime sleepiness or grogginess, which are typical side effects of most sleep medications, including the orexin receptor antagonist suvorexant. Furthermore, in the present trial a clinically diagnosed sleep abnormality such as insomnia was not a requirement; consequently, beneficial effects of NSS treatment are not limited to AD patients suffering from clinically recognized sleep problems.

[00519] The present findings demonstrate that NSS treatment not only improves sleep quality but also helps to maintain functional ability, reflected in activities of daily living, in mild to moderate AD patients. Although some pharmacological treatments, such as the acetylcholinesterase inhibitor donepezil, delay decline in activities of daily living, currently there are no approved non-pharmacological therapies achieving this effect. Based on scientific and clinical observations demonstrating a close relationship between sleep quality and activities of daily living, it can be presumed that improving sleep quality in AD patients would provide multiple benefits: better sleep will enhance patients' daytime performance, including cognitive function, and reduce daytime sleepiness. In line with this hypothesis, patients on NSS treatment maintained functional activity as reflected by their unchanged ADCS-ADL scores over the six-month treatment period. In contrast, ADCS-ADL scores of sham group patients dropped similarly to changes of placebo group patients in clinical trials (Doody, R. S., R. Raman, M. Farlow, T. Iwatsubo, B. Vellas, S. Joffe, K. Kieburtz, F. He, X. Sun, R. G. Thomas, P. S. Aisen, C. Alzheimer's Disease Cooperative Study Steering, E. Siemers, G. Sethuraman, R. Mohs and G. Semagacestat Study (2013). "A phase 3 trial of semagacestat for treatment of Alzheimer's disease." N Engl J Med 369(4): 341-350; Doody, R. S., R. G. Thomas, M. Farlow, T. Iwatsubo, B. Vellas, S. Joffe, K. Kieburtz, R. Raman, X. Sun, P. S. Aisen, E. Siemers, H. Liu-Seifert, R. Mohs, C. Alzheimer's Disease Cooperative Study Steering and G. Solanezumab Study (2014). "Phase 3 trials of solanezumab for mild-to-moderate Alzheimer's disease." N Engl J Med 370(4): 311-321). Even though the close relationship between sleep and daily activity is well documented, it is unknown at present whether improved sleep quality is the main factor contributing to the maintenance of ADCS-ADL scores in NSS treated patients, or whether improvement in sleep and continuation of functional ability are unrelated positive outcomes of the therapy.

[00520] Currently, the underlying mechanisms of improved sleep and maintained functional ability of AD patients in response to GSS treatment are not known. Preclinical studies indicate that 40 Hz sensory stimulation reverses Aβ and tau pathologies, leading to improved cognitive function in transgenic mice (Iaccarino, Singer et al. 2016; Adaikkan, C., S. J. Middleton, A. Marco, P. C. Pao, H. Mathys, D. N. Kim, F. Gao, J. Z. Young, H. J. Suk, E. S. Boyden, T. J. McHugh and L. H. Tsai (2019). "Gamma Entrainment Binds Higher-Order Brain Regions and Offers Neuroprotection." Neuron 102(5): 929-943 e928; Martorell, Paulson et al. 2019). Although human AD-related biomarker studies are in progress, at the moment it is unknown whether the same biochemical and neuroimmunological mechanisms identified in mice are activated in AD patients. The bidirectional interaction between sleep and disease progression (Wang and Holtzman 2020) supports the notion that improved sleep in response to GSS treatment could also slow disease progression.

CONCLUSION

[00521] The present study's findings indicate that NSS treatment helps maintain the everyday activity and quality of life of AD patients. Since measurements of both sleep fragmentation and ADCS-ADL were determined in the same patient cohort, the data suggest a positive treatment effect of maintaining the ability to complete daily activities in patients having improved sleep quality. NSS treatment consists of a non-invasive sensory stimulation; with its exceptional safety profile, long-term, chronic application is feasible. Expanded and longer trials will uncover additional clinical benefits and potentially disease-modifying properties of NSS treatment.

Example 3. Randomized Controlled Trial with a Greater Number of Participants

BACKGROUND

[00522] An additional randomized controlled trial was performed, using the same methods and inclusion criteria as the interim analysis of the trial disclosed herein in EXAMPLE 2. This trial involved a greater number of participants than the interim analysis.

METHODS

[00523] Patients with mild-to-moderate AD (MMSE 14-26, inclusive; n=74) were randomized to receive either 40 Hz noninvasive audio-visual stimulation or sham stimulation over a 6-month period. Functional abilities of patients were measured by the Alzheimer's Disease Cooperative Study - Activities of Daily Living (ADCS-ADL) scale at baseline and every four weeks during the study and follow-up period. Sleep quality was assessed from the nighttime activities of a subgroup of patients (n=7 in the treatment group, n=6 in the sham group) who were monitored continuously via a wrist-worn actigraphy watch throughout the 6-month period.

RESULTS

[00524] Adjusted ADCS-ADL scores from the beginning and the end of the trial were compared in patients who completed the trial; the sham group contained 19 patients and the treatment group contained 33 patients. Over the 6-month period, patients in the sham group (n=19) showed the expected decline, a 5.40-point drop in ADCS-ADL scores, whereas patients in the treatment group (n=33) receiving therapy exhibited only a 0.57-point decline. Changes in ADCS-ADL scores were statistically significantly different between the sham and treatment groups (P<0.01). Nighttime active durations in the treatment group were significantly (p<0.03) reduced in the second 3 months compared to the first 3 months, whereas such durations increased in the sham group. To evaluate the impact on active durations, normalization was done by dividing the duration of each active period by the duration of the corresponding entire nighttime period. Analysis of the active durations normalized by the corresponding nighttime period for each patient further confirmed opposite changes in nighttime active durations between the treatment and sham groups (p<0.001), with the treatment group experiencing reduced nighttime active durations and the sham group experiencing increased nighttime active durations.

CONCLUSION

[00525] This trial confirmed that patients receiving gamma stimulation therapy maintained their activities of daily living and showed improved sleep quality over a 6-month treatment period; these two outcome measures, functional ability and sleep quality, are known to be strongly linked in AD. Maintenance of functional ability represents an important treatment and management goal for AD patients, reducing formal and informal care and delaying time to institutionalization.