Title:
DYNAMIC VISUAL OPTIMIZATION
Document Type and Number:
WIPO Patent Application WO/2022/261031
Kind Code:
A9
Abstract:
Devices and methods capable of optimizing sensory inputs so as to allow observation of those sensory inputs, while ameliorating the constraints generally imposed by sensory processing or cognitive limits. Devices can include digital eyewear that detects problematic sensory inputs and adjusts one or more of: (A) the sensory inputs themselves, (B) the user's receipt of those sensory inputs, or (C) the user's sensory or cognitive reaction to those sensory inputs. Detecting problematic sensory inputs can include detecting warning signals. Adjusting sensory inputs, or the user's receipt thereof, can include audio/video shading/inverse-shading (for luminance/loudness and for particular frequencies), intermittent strobe presentation of objects, and audio/video object recognition.
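
To make the detect-and-adjust pipeline concrete, here is a minimal sketch of the loop the abstract describes, assuming hypothetical sensor readings and threshold values; none of the names below come from the application itself.

```python
# Minimal sketch of the detect-and-adjust loop described in the abstract.
# All class and method names here are hypothetical illustrations, not part
# of the patent or any real eyewear API.

from dataclasses import dataclass

@dataclass
class SensoryFrame:
    luminance: float      # mean scene luminance, cd/m^2
    loudness_db: float    # ambient loudness, dB SPL
    warning: bool         # external warning signal present?

LUMINANCE_LIMIT = 3000.0  # example threshold; a real device would calibrate per user
LOUDNESS_LIMIT = 85.0

def adjust(frame: SensoryFrame) -> dict:
    """Decide which of the three adjustment paths (A), (B), (C) to take."""
    actions = {}
    if frame.warning:
        actions["pre_shade"] = True              # act before the overload arrives
    if frame.luminance > LUMINANCE_LIMIT:
        actions["video_shade"] = frame.luminance / LUMINANCE_LIMIT
    if frame.loudness_db > LOUDNESS_LIMIT:
        actions["audio_shade"] = frame.loudness_db - LOUDNESS_LIMIT
    return actions
```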

Inventors:
LEWIS, Scott (US)
Application Number:
PCT/US2022/032407
Publication Date:
October 19, 2023
Filing Date:
June 06, 2022
Assignee:
PERCEPT TECH INC (US)
International Classes:
G06F1/16; G06F3/01
Attorney, Agent or Firm:
SWERNOFSKY, Steven (US)
Claims:

1. A method of altering an image received by a wearer, the method including steps of: receiving a continuous image from an external source; and adjusting the wearer’s receipt of the image using periodic alteration of the image, the periodic alteration of the image providing a sequence of discontinuous images; the sequence of discontinuous images, when viewed by the wearer, providing a virtual continuous image that limits an amount of cognitive or sensory processing needed by the wearer to obtain a clear view of the external source.
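
A minimal sketch of the periodic alteration recited in claim 1, assuming a 60 fps frame stream and an illustrative period and duty cycle; the function name and parameters are hypothetical.

```python
# Sketch of the periodic alteration in claim 1: a duty-cycled "shutter"
# turns a continuous frame stream into a sequence of discontinuous images.
# Frame rate and duty cycle are illustrative assumptions.

def discontinuous_sequence(frames, period_frames=6, open_frames=2):
    """Pass frames only during the 'open' part of each period."""
    for i, frame in enumerate(frames):
        if i % period_frames < open_frames:
            yield frame          # visible portion of the period
        # otherwise the frame is blocked (shaded); nothing is yielded

# Example: from a 60 fps stream, the wearer sees 2 of every 6 frames,
# i.e. a 10 Hz stroboscopic presentation at a 1/3 duty cycle.
visible = list(discontinuous_sequence(range(60)))
```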

2. A method as in claim 1, wherein the sequence of discontinuous images provides a stroboscopic view of an object within the image.

3. A method as in claim 2, wherein the sequence of discontinuous images includes a sequence of still images having the effect of providing the wearer with a virtual moving image of the object.

4. A method as in claim 2, wherein the sequence of discontinuous images includes a sequence of moving images having the effect of providing the wearer with a virtual continuous moving image of the object.

5. A method as in claim 1, wherein the step of adjusting the wearer’s receipt of the image is responsive to a periodic signal, the periodic signal being responsive to an external source, and including one or more of: a fraction of time during which to perform shading/inverse-shading; a frequency at which the wearer’s receipt of the sensory input is periodically shaded/inverse-shaded; or an amount of shading/inverse-shading applied to the sensory input.
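
The periodic signal of claim 5 carries three parameters: duty fraction, frequency, and shading amount. Here is a sketch of a container for them, with a helper that evaluates the shade level at a given time; the field names are illustrative, not from the application.

```python
# Hypothetical container for the three parameters of claim 5's periodic signal.

from dataclasses import dataclass

@dataclass
class PeriodicSignal:
    frequency_hz: float   # how often shading/inverse-shading repeats
    duty_fraction: float  # fraction of each period spent shaded (0..1)
    shade_amount: float   # attenuation applied while shaded (0..1)

    def shade_at(self, t_seconds: float) -> float:
        """Shade level at time t: shaded during the first duty_fraction of each period."""
        phase = (t_seconds * self.frequency_hz) % 1.0
        return self.shade_amount if phase < self.duty_fraction else 0.0
```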

6. A method as in claim 5, wherein the periodic signal is responsive to a relative motion between the external source and the wearer.

7. A method as in claim 5, wherein the periodic signal is responsive to a relative speed of approach of the external source to the wearer.

8. A method as in claim 5, wherein the periodic signal is responsive to a relative speed of transit of the external source across the wearer’s field of view.
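
Claims 6 through 8 tie the periodic signal to relative motion, approach speed, or transit speed. One plausible mapping, sketched under the assumption that each strobe interval should show the object displaced by a roughly constant visual angle; the constants are illustrative only.

```python
# Claims 6-8 tie the periodic signal to relative motion. A sketch: scale the
# strobe frequency with angular transit speed so each strobe interval shows
# the object displaced by roughly a constant visual angle.

DEGREES_PER_STROBE = 2.0   # desired apparent step per flash (assumed)

def strobe_frequency(angular_speed_deg_per_s: float,
                     min_hz: float = 4.0, max_hz: float = 60.0) -> float:
    """Strobe rate proportional to transit speed, clamped to a usable range."""
    hz = angular_speed_deg_per_s / DEGREES_PER_STROBE
    return max(min_hz, min(max_hz, hz))

# Example: a ball crossing the field of view at 90 deg/s is strobed at 45 Hz.
```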

9. A method as in claim 1, wherein the received image is responsive to the external source including a moving object in the wearer’s field of view.

10. A method as in claim 1, wherein the steps of adjusting the received image are responsive to an amount of visual blur of the external source.

11. A method as in claim 1, wherein the steps of adjusting the received image are responsive to an amount of glare in the background of the image.

12. A method as in claim 1, wherein the steps of adjusting the received image are responsive to an amount of visual noise in the background of the image.

13. A method as in claim 1, wherein the steps of adjusting the received image are responsive to one or more of: a size of the image, or a speed of the object, with respect to the background.

14. A method as in claim 1, wherein the steps of adjusting the received image use one or more of: shading/inverse-shading with respect to the image; polarization of the image or the background of the image; adjusting one or more of: a color of the image or at least a portion of the background, a color balance of the image or at least a portion of the background, a difference between a color of the image and of at least a portion of the background, or a false color associated with the image or at least a portion of the background.

15. A method as in claim 14, wherein the steps of adjusting color include steps of using an electromagnetic frequency outside the normal range of human vision.

16. A method as in claim 15, wherein the electromagnetic frequency outside the normal range of human vision provides a phosphorescent effect.

17. A method as in claim 1, wherein the steps of adjusting the received image use one or more of: a sound, a haptic or touch input, a smell, an electric charge or current, or an electromagnetic signal, in response to one or more of: a movement of the object with respect to the wearer or the background, a speed of the object with respect to the wearer or the background, a size of the object with respect to the wearer or the background, a distance of the object with respect to the wearer, a brightness of the object, or a brightness of the object with respect to the background.

18. A method as in claim 17, including steps of coupling the sound, the haptic or touch input, the smell, the electric charge or current, or the electromagnetic signal, to a wearer in response to receipt of the received image.

19. A method as in claim 1, including steps of adjusting the received image with respect to an object using one or more of: an augmented reality or virtual reality presentation, the presentation providing information with respect to a location, a speed, a direction of motion, or a predicted landing spot, of the object with respect to a wearer.

20. A method as in claim 1, including steps of adjusting the received image with respect to an object using one or more of: an artificial intelligence, machine learning, predictive analytics, or statistical technique; the technique providing information about the object with respect to a wearer.

21. Digital eyewear including a sensory input disposed to be coupled to an external source; a computing device coupled to the sensory input, the computing device disposed to determine when the sensory input exceeds a sensory processing limit or a cognitive limit; a circuit disposed to adjust a wearer’s receipt of the sensory input, the circuit being disposed to periodically shade/inverse-shade the sensory input in response to a signal from the computing device; the signal representing one or more of: an amount of shading/inverse-shading, a fraction of time during which to perform shading/inverse-shading, or a period.
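
Claim 21 divides the work between a computing device, which decides when a limit is exceeded and emits a signal, and a circuit, which applies the shading. A sketch of the signal computation, with assumed threshold and duty-cycle values; nothing here is from the application itself.

```python
# Sketch of claim 21's division of labor: the computing device emits a
# (shade_amount, duty_fraction, period) signal; a separate circuit applies it.
# Threshold, duty cycle, and period values are assumptions for illustration.

def compute_signal(luminance: float, limit: float = 2000.0):
    """Return (shade_amount, duty_fraction, period_s), or None if within limits."""
    if luminance <= limit:
        return None                                  # no adjustment needed
    excess = luminance / limit
    shade_amount = min(0.9, 1.0 - 1.0 / excess)      # deeper shade for more excess
    return (shade_amount, 0.5, 0.02)                 # 50% duty cycle, 20 ms period
```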

22. Digital eyewear as in claim 21, wherein the periodic shading/inverse-shading is responsive to an excess auditory or visual complexity or noise in the wearer’s field of view.

23. Digital eyewear as in claim 22, wherein the excess auditory or visual complexity or noise is responsive to one or more of: unusual audio/video sensory patterns, fast-moving or otherwise quickly changing objects, in the wearer’s field of view.

24. Digital eyewear as in claim 21, wherein the periodic shading/inverse-shading is responsive to an excess brightness or loudness of the sensory input.

25. Digital eyewear as in claim 24, wherein the periodic shading/inverse-shading is disposed to reduce an excess brightness or loudness of the sensory input using one or more of: a periodic amount of partial shading/inverse-shading applied to the sensory input; a periodic amount of time during which the sensory input is allowed to reach the wearer; a periodic amount of time during which the sensory input is shaded/inverse-shaded; a periodic sequence of durations during which the sensory input is not shaded/inverse-shaded.

26. Digital eyewear as in claim 21, wherein the periodic shading/inverse-shading is responsive to a sudden change in brightness or loudness of the sensory input.

27. Digital eyewear as in claim 21, wherein the signal is responsive to an object in the wearer’s field of view.

28. Digital eyewear as in claim 27, wherein the signal is responsive to a direction of movement of the object.

29. Digital eyewear as in claim 21, wherein the signal is responsive to the wearer’s gaze direction.

30. Digital eyewear as in claim 21, wherein the signal is responsive to a background of an object in the wearer’s field of view.

31. Digital eyewear as in claim 30, wherein the signal is responsive to a total brightness or loudness of the background.

32. Digital eyewear as in claim 30, wherein the signal is responsive to a change in brightness or loudness of the background.

33. Digital eyewear as in claim 30, wherein the signal is responsive to a brightness or loudness of a portion of the background from a selected direction.

34. Digital eyewear as in claim 30, wherein the signal is responsive to a change in brightness or loudness of a portion of the background from a selected direction.

35. Digital eyewear as in claim 30, wherein the signal is responsive to an amount of auditory or visual complexity or noise in the background.

36. Digital eyewear as in claim 21, wherein the signal is responsive to a direction of an object with respect to the wearer’s field of view.

37. Digital eyewear as in claim 36, wherein the signal is responsive to whether the object is disposed in a frontal or peripheral portion of the wearer’s field of view.

38. Digital eyewear as in claim 36, wherein the signal is responsive to whether the object is disposed in a near or distant portion of the wearer’s field of view.

39. Digital eyewear as in claim 21, wherein the signal is responsive to a distance of an object in the wearer’s field of view.

40. Digital eyewear as in claim 21, wherein the signal is responsive to a speed of an object in the wearer’s field of view.

41. Digital eyewear including a sensory input disposed to receive an image from an external source; a computing device coupled to the sensory input, disposed to adjust a wearer’s receipt of the sensory input using periodic alteration of the sensory input; wherein the periodic alteration of the received image limits an amount of the image viewable by the wearer, whereby the periodic alteration limits an amount of cognitive or sensory processing needed by the wearer to obtain a clear view of the external source.

42. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to a periodic signal, the periodic signal including one or more of: a fraction of time during which to perform shading/inverse-shading; a frequency at which the wearer’s receipt of the sensory input is periodically shaded/inverse-shaded; or an amount of shading/inverse-shading applied to the sensory input.

43. Digital eyewear as in claim 42, wherein the periodic signal is responsive to a relative motion between the external source and the wearer.

44. Digital eyewear as in claim 42, wherein the periodic signal is responsive to a relative speed of approach of the external source to the wearer.

45. Digital eyewear as in claim 42, wherein the periodic signal is responsive to a relative speed of transit of the external source across the wearer’s field of view.

46. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to an image of a moving object in the wearer’s field of view.

47. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to an amount of visual blur in an image of a moving object in the wearer’s field of view.

48. Digital eyewear as in claim 46, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to a background of the image of the moving object.

49. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to an amount of glare in the background of the image.

50. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to an amount of visual noise in the background of the image.

51. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to one or more of: a size of the image, or a speed of the object, with respect to the background.

52. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image using one or more of: shading/inverse-shading with respect to the image; polarization of the image or the background of the image; adjusting one or more of: a color of the image or at least a portion of the background, a color balance of the image or at least a portion of the background, a difference between a color of the image and of at least a portion of the background, or a false color associated with the image or at least a portion of the background.

53. Digital eyewear as in claim 52, wherein adjusting one or more of: a color of the image or at least a portion of the background, a color balance of the image or at least a portion of the background, a difference between a color of the image and of at least a portion of the background, or a false color associated with the image or at least a portion of the background, includes using an electromagnetic frequency outside the normal range of human vision.

54. Digital eyewear as in claim 53, wherein the electromagnetic frequency outside the normal range of human vision provides a phosphorescent effect.

55. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the received image with respect to an object using one or more of: a sound, a haptic or touch input, a smell, an electric charge or current, or an electromagnetic signal, associated with one or more of: a movement of the object with respect to the wearer or the background, a speed of the object with respect to the wearer or the background, a size of the object with respect to the wearer or the background, a distance of the object with respect to the wearer, a brightness of the object, a brightness of the object with respect to the background.

56. Digital eyewear as in claim 55, including a presentation input disposed to couple the sound, the haptic or touch input, the smell, the electric charge or current, or the electromagnetic signal, to the wearer in response to receipt of the received image.

57. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input with respect to an object using one or more of: an augmented reality or virtual reality presentation, the presentation providing information with respect to a location, a speed, a direction of motion, or a predicted landing spot, of the object with respect to the wearer.

58. Digital eyewear as in claim 41, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input with respect to an object using one or more of: an artificial intelligence, machine learning, predictive analytics, or statistical technique; the technique providing information about the object with respect to the wearer.

59. Digital eyewear as in claim 21, wherein the periodic shading/inverse-shading provides a stroboscopic view of an object in the wearer’s field of view.

60. Digital eyewear as in claim 59, wherein the object in the wearer’s field of view is moving with respect to the wearer.

61. Digital eyewear as in claim 59, wherein the stroboscopic view includes a sequence of still images of the object.

62. Digital eyewear as in claim 61, wherein the sequence of still images of the object provides the wearer with a virtual moving image of the object.

63. Digital eyewear as in claim 59, wherein the stroboscopic view includes a sequence of motion picture images of the object.

64. Digital eyewear as in claim 63, wherein the sequence of motion picture images of the object provides the wearer with a virtual continuous moving image of the object.

65. Digital eyewear as in claim 21, wherein the periodic shading/inverse-shading provides a view of a moving object in the wearer’s field of view, wherein the view excludes from the wearer’s view, one or more of: motion blur, blur with respect to a difference between focused and unfocused fields of view, blur with respect to peripheral vision, blur with respect to focus on the moving object, or blur with respect to a difference between the moving object and a background.

66. Digital eyewear as in claim 21, wherein the periodic shading/inverse-shading provides a view of a moving object in the wearer’s field of view, wherein the view assists the wearer when engaged in one or more of: using an optical system, participating in or viewing a high-speed or other sporting event, participating in or viewing an entertainment or social event, conducting or reviewing a law enforcement or military operation, conducting or reviewing a firefighting, search/rescue, or emergency responder operation, or conducting or reviewing a teaching or supervisory operation with respect to a skilled activity, the skilled activity including one or more of: bomb defusal, combat training, contact sports, horseback riding, marksmanship, a medical procedure, operating heavy machinery, operating a vehicle, or experiencing a medical condition.

67. Digital eyewear including a sensory input disposed to receive an image from an external source; a computing device coupled to the sensory input, the computing device disposed to determine when the sensory input exceeds a sensory processing limit or a cognitive limit; the computing device being disposed to adjust a wearer’s receipt of the received image so as to limit an amount of the received image viewable by the wearer of the digital eyewear, wherein the adjustment limits an amount of cognitive or sensory processing needed by the wearer to obtain a clear view of the external source; wherein the signal is responsive to entry or exit of the wearer with respect to an enclosed space.

68. Digital eyewear as in claim 67, wherein the enclosed space includes one or more of: a building, a room, a tunnel, a vehicle.

69. Digital eyewear as in claim 67, wherein the enclosed space includes one or more of: a building, a room, a tunnel, a vehicle; the signal is responsive to a warning device disposed near an entrance or an exit to the enclosed space.

70. Digital eyewear as in claim 21, wherein the computing device is disposed to adjust the wearer’s receipt of the received image in response to periodic shading/inverse-shading; wherein the periodic shading/inverse-shading is responsive to an audio/video input to which human sensory inputs lack high precision.

71. Digital eyewear as in claim 21, wherein the signal is responsive to audio/video inputs near the limits of human sensory detection.

72. Digital eyewear as in claim 21, wherein the signal is responsive to a warning signal from a potential source of audio/video input.

73. Digital eyewear as in claim 21, wherein the signal is responsive to an electromagnetic signal disposed to be coupled to the computing device.

74. Digital eyewear as in claim 21, wherein the wearer’s receipt of the received image is responsive to an activity the wearer is performing; the signal is responsive to a state of that activity.

75. Digital eyewear as in claim 74, wherein the wearer is performing an activity with respect to participating in or viewing a sporting event or public event; the signal is responsive to a state of that sporting event or public event.

76. Digital eyewear as in claim 74, wherein the wearer is conducting or reviewing an activity with respect to a law enforcement or military operation or event; the signal is responsive to a state of that law enforcement or military operation or event.

77. Digital eyewear as in claim 74, wherein the wearer is experiencing an initial phase, an ongoing phase, or a final phase, of a medical condition; the signal is responsive to a prediction, an evaluation, an analysis, a prescription, or a comparison with another medical condition, with respect to that medical condition.

78. Digital eyewear as in claim 74, wherein the wearer is operating a vehicle; the wearer’s receipt of the received image is responsive to an optical system associated with one or more of: operating the vehicle, performance of the vehicle, or a state of the vehicle.

79. Digital eyewear as in claim 78, wherein the optical system includes one or more of: glasses, goggles, facemasks or helmets, or contact lenses; one or more cameras; mirrors; scopes or sights; screens, computer screens, smartphone or mobile device screens; or windows or windshields.

80. Digital eyewear as in claim 74, wherein the wearer is coaching, observing, or training, another person, with respect to an activity in which the presence of the wearer would interfere.

81. Digital eyewear as in claim 74, wherein the wearer is participating in an event including entertainment, interactive entertainment, a live-action event, a comedy or horror show, a theater event, or a role-playing event.

82. Digital eyewear including a sensory input disposed to receive an image from a potential source, the potential source being disposed to intentionally overload a wearer’s sensory processing limit or cognitive limit; a second input disposed to receive a warning signal from the potential source, the warning signal being disposed to indicate when the overloading of the sensory processing limit or cognitive limit will occur; a computing device coupled to the sensory input and the second input, the computing device being disposed to periodically or randomly alter the received image in response to the second input; wherein the altered received image limits an amount of an unaltered received image viewable by the wearer, whereby the periodic or random alteration prevents an overload of the wearer’s sensory processing limit or cognitive limit.
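
A sketch of the random alteration recited in claim 82, assuming the warning arrives as a callable flag and that passing roughly 30% of frames suffices; the probability and function names are illustrative assumptions.

```python
# Claim 82: on receipt of a warning that an intentional overload is coming,
# alter the image periodically or randomly so only part of it reaches the
# wearer. A sketch using random blocking.

import random

def altered_stream(frames, warning_active, pass_probability=0.3):
    """Randomly drop frames while the warning is active."""
    for frame in frames:
        if not warning_active():
            yield frame                   # no threat: pass everything
        elif random.random() < pass_probability:
            yield frame                   # wearer sees ~30% of frames
        # otherwise the frame is blocked before it reaches the wearer
```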

83. Digital eyewear as in claim 82, wherein the warning signal is encrypted/obfuscated; the computing device is disposed to de-encrypt/de-obfuscate the warning signal in response to a de-encryption/de-obfuscation key.

84. Digital eyewear as in claim 82, wherein the warning signal is encrypted/obfuscated; only those wearers of digital eyewear with the de-encryption/de-obfuscation key are able to prevent overload of cognitive or sensory processing needed by the wearer.
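
Claims 83 and 84 gate protection on possession of a de-encryption/de-obfuscation key. A sketch using off-the-shelf symmetric encryption; the Fernet scheme from the Python cryptography package stands in here for whatever scheme a real system would use.

```python
# Claims 83-84: only eyewear holding the corresponding key can decode the
# warning and protect its wearer. Fernet is an illustrative stand-in.

from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # shared with authorized eyewear only
source = Fernet(key)
warning = source.encrypt(b"overload in 500 ms")

def try_decode(token: bytes, eyewear_key: bytes):
    """Return the warning plaintext if the key matches, else None."""
    try:
        return Fernet(eyewear_key).decrypt(token)   # authorized: acts on warning
    except (InvalidToken, ValueError):
        return None                                  # unauthorized: no protection
```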

85. A system including digital eyewear having a sensory input disposed to be coupled to an external source; a circuit disposed to adjust a wearer’s receipt of the sensory input, the circuit being responsive to a warning signal from a potential source of audio/video input.

86. A system as in claim 85, wherein the potential source of audio/video input includes law enforcement or military equipment.

87. A system as in claim 85, wherein the potential source is disposed to intentionally overload a sensory processing limit or cognitive limit.

88. A system as in claim 87, wherein the potential source is disposed to intentionally present an excess auditory or visual complexity or noise in the wearer’s field of view.

89. A system as in claim 87, wherein the potential source is disposed to intentionally present an excess brightness or loudness of the sensory input.

90. A system as in claim 85, wherein the warning signal is encrypted or obfuscated; the circuit is disposed to de-encrypt/de-obfuscate the warning signal in response to a de-encryption/de-obfuscation key.

91. A system as in claim 85, wherein the circuit is disposed to adjust the wearer’s response to the sensory input, the adjustment including one or more of: altering the wearer’s focal length, altering the wearer’s gaze direction, altering the wearer’s brightness reception.

92. A system as in claim 85, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input by shading/inverse-shading brightness or loudness from a selected source.

93. A system as in claim 92, wherein the selected source includes an ambient light source; the circuit is responsive to a warning signal from near a region where the ambient light source is sufficiently blocked that the wearer’s sensory input is affected.

94. A system as in claim 85, wherein the circuit adjusts the wearer’s receipt of the sensory input when the wearer enters or exits an enclosed space.

95. A system as in claim 94, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input by shading/inverse-shading brightness or loudness from a selected source.

96. A system as in claim 95, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to an amount by which the enclosed space affects the sensory input.

97. A system as in claim 95, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to a speed at which the wearer enters or exits the enclosed space.

98. A system as in claim 94, wherein the enclosed space includes a terrain feature that blocks an ambient light source by a sufficient amount that the wearer’s sensory input is affected; the circuit is responsive to a warning signal from near an entrance or exit from the enclosed space.

99. A system as in claim 98, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input in response to an amount by which the enclosed space affects the sensory input.

100. A system as in claim 98, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input in response to a speed at which the wearer enters or exits the enclosed space.

101. A system including digital eyewear having a sensory input disposed to be coupled to an external source; a circuit disposed to adjust the wearer’s receipt of the sensory input, the circuit being responsive to a warning signal; the warning signal being responsive to the digital eyewear’s upcoming entry into or exit from an enclosed space, whereby an ambient lighting source is blocked by a sufficient amount that the wearer’s sensory input is affected.

102. A system as in claim 101, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input by shading/inverse-shading brightness or loudness from a selected source not available within the enclosed space.

103. A system as in claim 102, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to an amount by which the enclosed space affects the sensory input.

104. A system as in claim 102, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to a speed at which the wearer enters or exits the enclosed space.

105. A system as in claim 101, wherein the enclosed space includes a terrain feature that blocks an ambient light source by a sufficient amount that the wearer’s sensory input is affected; the circuit is responsive to a warning signal from near an entrance or exit from the enclosed space.

106. A system as in claim 105, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input in response to an amount by which the enclosed space affects the sensory input.

107. A system as in claim 105, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input in response to a speed at which the wearer enters or exits the enclosed space.

108. A system including digital eyewear having a sensory input disposed to be coupled to an external source; a circuit disposed to adjust the wearer’s receipt of the sensory input, the circuit being responsive to a change in the external source; the circuit operating in response to the external source with a response time less than the response time of the wearer’s eyes; whereby a change in the external source is blocked by a sufficient amount that the wearer’s sensory input is not affected.
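
Claim 108 requires the circuit to react faster than the wearer's eyes. A sketch of that timing argument: pupillary constriction takes on the order of hundreds of milliseconds, so a loop that shades within one roughly 10 ms frame blocks a luminance spike before the eye responds; the spike threshold and frame period are assumptions.

```python
# Sketch of claim 108: react to a change in the source within one frame
# period (~10 ms), well inside the eye's own response time (~200 ms+).

def guard(frames, spike_ratio=4.0):
    """Shade any frame whose luminance jumps by more than spike_ratio."""
    previous = None
    for luminance, frame in frames:          # one iteration per ~10 ms frame
        if previous is not None and luminance > spike_ratio * previous:
            yield ("SHADED", frame)          # clamp within one frame period
        else:
            yield ("CLEAR", frame)
        previous = luminance
```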

109. A system as in claim 108, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to one or more of: a selected source, a brightness of an ambient environment, or an amount of glare falling on the wearer’s eyes or ears.

110. A system as in claim 108, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to an amount by which the wearer’s eyes or ears are affected by the sensory input.

111. A system as in claim 108, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to a speed of the change in the external source.

112. Digital eyewear including a sensory input disposed to be coupled to an external source; a computing device coupled to the sensory input, the computing device disposed to determine when the sensory input exceeds a sensory processing limit or a cognitive limit; a circuit disposed to adjust a wearer’s receipt of the sensory input, the circuit being disposed to periodically block the sensory input in response to a signal from the computing device; the signal representing one or more of: a frequency for blocking the sensory input, a fraction of each period for blocking the sensory input, a duration for a period for blocking the sensory input, or a technique for blocking the sensory input.
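
Claim 112's signal names four blocking parameters: frequency, fraction, duration, and technique. A sketch that expands them into a concrete schedule of blocked intervals; the parameter values in the example are illustrative.

```python
# Expand claim 112's (frequency, fraction, duration, technique) signal into
# explicit blocked intervals. Names and values are assumptions.

def blocking_schedule(frequency_hz, blocked_fraction, total_s, technique="opaque"):
    """Yield (start_s, end_s, technique) intervals during which input is blocked."""
    period = 1.0 / frequency_hz
    t = 0.0
    while t < total_s:
        yield (t, t + period * blocked_fraction, technique)
        t += period

# Example: 20 Hz blocking, 40% of each period blocked, over one second.
intervals = list(blocking_schedule(20.0, 0.4, 1.0))
```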

113. Digital eyewear as in claim 112, wherein the periodic blocking of the sensory input is responsive to an amount of excess auditory or visual complexity or noise.

114. Digital eyewear as in claim 113, wherein the excess auditory or visual complexity or noise is responsive to one or more of: unusual audio/video sensory patterns, fast-moving or otherwise quickly changing objects, in the wearer’s field of view.

115. Digital eyewear as in claim 112, wherein the periodic blocking of the sensory input is responsive to an excess brightness or loudness of the sensory input.

116. Digital eyewear as in claim 112, wherein the periodic blocking of the sensory input is responsive to a sudden change in brightness or loudness of the sensory input.

117. Digital eyewear as in claim 112, wherein the signal is responsive to recognition of a selected object in the wearer’s field of view.

118. Digital eyewear as in claim 117, wherein the signal is responsive to a direction of movement of the selected object.

119. Digital eyewear as in claim 112, wherein the signal is responsive to one or more of: the wearer’s gaze direction, the wearer’s focal length.

120. Digital eyewear as in claim 112, wherein the signal is responsive to a background of a selected object in the wearer’s field of view.

121. Digital eyewear as in claim 120, wherein the signal is responsive to a total brightness or loudness of the background.

122. Digital eyewear as in claim 120, wherein the signal is responsive to a change in brightness or loudness of the background.

123. Digital eyewear as in claim 120, wherein the signal is responsive to a brightness or loudness of a portion of the background from a selected direction.

124. Digital eyewear as in claim 120, wherein the signal is responsive to a change in brightness or loudness of a portion of the background from a selected direction.

125. Digital eyewear as in claim 120, wherein the signal is responsive to an amount of auditory or visual complexity or noise in the background.

126. Digital eyewear as in claim 112, wherein the signal is responsive to a direction of a selected object with respect to the wearer’s field of view.

127. Digital eyewear as in claim 126, wherein the signal is responsive to whether the selected object is disposed in a frontal or peripheral portion of the wearer’s field of view.

128. Digital eyewear as in claim 126, wherein the signal is responsive to whether the selected object is disposed in a near or distant portion of the wearer’s field of view.

129. Digital eyewear as in claim 112, wherein the signal is responsive to a distance of a selected object in the wearer’s field of view.

130. Digital eyewear as in claim 112, wherein the signal is responsive to a speed of a selected object in the wearer’s field of view.

131. Digital eyewear as in claim 112, wherein the periodic blocking of the sensory input provides a stroboscopic view of an object in the wearer’s field of view.

132. Digital eyewear as in claim 131, wherein the object in the wearer’s field of view appears to be moving with respect to the wearer.

133. Digital eyewear as in claim 131, wherein the stroboscopic view includes a sequence of still images of the object.

134. Digital eyewear as in claim 131, wherein the stroboscopic view includes a sequence of motion picture images of the object.

135. Digital eyewear as in claim 112, wherein the periodic blocking of the sensory input provides a view of a moving object in the wearer’s field of view, wherein the view excludes from the wearer’s view, one or more of: motion blur, blur with respect to a difference between focused and unfocused fields of view, blur with respect to peripheral vision, blur with respect to focus on the moving object, or blur with respect to a difference between the moving object and a background.

136. Digital eyewear as in claim 112, wherein the periodic blocking of the sensory input provides a view of a moving object in the wearer’s field of view, wherein the view assists the wearer when engaged in one or more of: using an optical system, participating in or viewing a high-speed or other sporting event, participating in or viewing an entertainment or social event, conducting or reviewing a law enforcement or military operation, conducting or reviewing a firefighting, search/rescue, or emergency responder operation, or conducting or reviewing a teaching or supervisory operation with respect to a skilled activity, the skilled activity including one or more of: bomb defusal, combat training, contact sports, horseback riding, marksmanship, a medical procedure, operating heavy machinery, operating a vehicle, or experiencing a medical condition.

137. Digital eyewear including a sensory input disposed to be coupled to an external source; a signal receiver disposed to determine a selected portion of the external source for which to adjust a wearer’s attention; a circuit disposed to adjust the wearer’s receipt of the sensory input, the circuit being responsive to the selected portion and to the received signal indicating how to adjust the wearer’s attention.

138. Digital eyewear as in claim 137, wherein the circuit disposed to adjust the wearer’s receipt of the sensory input is disposed to adjust the wearer’s attention to improve accurate identification of one or more of: location, movement, speed, or direction, of an object or person in a sporting event; wherein a coach, observer, scout, or trainer, can more accurately determine a skill level of a player with respect to that sporting event.

139. Digital eyewear as in claim 138, wherein the sporting event includes one or more of: aircraft piloting or horseback dressage; automobile racing, horse racing, or skiing; archery, baseball, bowling, golf, handball, shooting, or tennis; basketball, football, jai alai, polo or water polo, roller skating, or soccer.

140. Digital eyewear as in claim 137, wherein adjusting the wearer’s receipt of the sensory input includes highlighting or suppressing at least part of the selected portion of the external source.

141. Digital eyewear as in claim 140, wherein highlighting or suppressing at least part of the selected portion of the external source includes one or more of: suppressing a part of the external source other than a selected object toward which the wearer’s attention is to be drawn, or suppressing a part of the external source from which the wearer’s attention is to be drawn away.

142. Digital eyewear as in claim 141, wherein suppressing a part of the external source includes one or more of: periodically blocking the suppressed part, wherein the wearer’s eyes or ears receive the suppressed part only some of the time, or randomly blocking the suppressed part, wherein the wearer’s eyes or ears receive the suppressed part only some of the time.

143. Digital eyewear as in claim 142, wherein the circuit disposed to adjust the wearer’s receipt of the sensory input periodically blocks the suppressed part in response to one or more of: a frequency for blocking the suppressed part, a fraction of each period for blocking the suppressed part, a duration for a period for blocking the suppressed part, a technique for blocking the suppressed part.

144. Digital eyewear as in claim 140, wherein highlighting or suppressing at least part of the selected portion of the external source includes one or more of: reducing a brightness of part of the external source other than a selected object toward which the wearer’s attention is to be drawn, or reducing a brightness of part of the external source from which the wearer’s attention is to be drawn away.
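
A sketch of the brightness reduction of claims 140 through 144 using NumPy; the mask, dimming factor, and function name are assumptions for illustration, not part of the application.

```python
# Claims 140-144: highlight a selected object by dimming everything else.
# Assumes an 8-bit image array and a boolean mask of the selected object.

import numpy as np

def dim_background(image: np.ndarray, object_mask: np.ndarray,
                   dim_factor: float = 0.4) -> np.ndarray:
    """Reduce brightness outside the selected object to draw attention to it."""
    out = image.astype(np.float32)
    out[~object_mask] *= dim_factor       # suppress the non-selected part
    return out.clip(0, 255).astype(np.uint8)

# image: (H, W) or (H, W, 3) uint8 array; object_mask: boolean (H, W) array.
```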

145. Digital eyewear as in claim 144, wherein reducing a brightness of part of the external source includes one or more of: periodically blocking the part reduced in brightness, wherein the wearer’s eyes or ears receive the part reduced in brightness only some of the time, or randomly blocking the part reduced in brightness, wherein the wearer’s eyes or ears receive the part reduced in brightness only some of the time.

146. Digital eyewear as in claim 145, wherein the circuit disposed to adjust the wearer’s receipt of the sensory input periodically blocks the part reduced in brightness in response to one or more of: a frequency for blocking the part reduced in brightness, a fraction of each period for blocking the part reduced in brightness, a duration for a period for blocking the part reduced in brightness, a technique for blocking the part reduced in brightness.

147. Digital eyewear as in claim 140, wherein highlighting or suppressing at least part of the selected portion of the external source includes one or more of: altering a color of part of the external source other than a selected object toward which the wearer’s attention is to be drawn, or altering a color of part of the external source from which the wearer’s attention is to be drawn away.

148. Digital eyewear as in claim 147, wherein altering a color of part of the external source includes one or more of: replacing the color with another color, replacing the color with grey.

149. Digital eyewear as in claim 147, wherein altering a color of part of the external source includes one or more of: periodically replacing the part for which to alter color with another image, wherein the wearer’s eyes or ears receive the part for which to alter color only some of the time, or randomly replacing the part for which to alter color with another image, wherein the wearer’s eyes or ears receive the part for which to alter color only some of the time.

150. Digital eyewear as in claim 149, wherein the circuit disposed to adjust the wearer’s receipt of the sensory input periodically replaces the part for which to alter color in response to one or more of: a frequency for replacing the part for which to alter color, a fraction of each period for replacing the part for which to alter color, a duration for a period for replacing the part for which to alter color, a technique for replacing the part for which to alter color.

151. Digital eyewear as in claim 137, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more of: one or more selected objects, or one or more selected portions of the wearer’s field of view.

152. Digital eyewear as in claim 137, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more of: a selected activity, object, or vehicle, in a high-speed event; a selected ball, player, or sporting equipment in a sporting event; a selected activity, actor, background element, detail, expression, prop, or script element, in an entertainment event; or a selected person, group, activity, background element, detail, or expression, in a social event.

153. Digital eyewear as in claim 137, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more of: a selected action, background element, detail, expression, foreground element, landing zone, machine, movement activity, object, person, target, threat, or vehicle, in a law enforcement or military operation; or a selected action, background element, detail, expression, foreground element, machine, movement activity, object, or person, in a firefighting, search/rescue, or emergency responder operation.

154. Digital eyewear as in claim 137, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more elements in the wearer’s field of view for which the wearer has a cognitive disability.

155. Digital eyewear including a sensory input disposed to receive a continuous image from an external source; a computing device coupled to the sensory input, the computing device disposed to adjust receipt of the continuous image; wherein the computing device is disposed to periodically or randomly provide a sequence of discontinuous images, the sequence of discontinuous images being integrable to provide a virtual continuous image when viewed; wherein the virtual continuous image improves visual acuity of a view of the external source.
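
Claim 155's discontinuous images are "integrable" into a virtual continuous image. One way to picture that integration is as a running temporal average over the frames that actually reach the eye; the window length below is an assumption for illustration.

```python
# Sketch of claim 155's integration: approximate perceptual integration as a
# moving average over the visible (non-blocked) frames.

from collections import deque

import numpy as np

def integrate(visible_frames, window=5):
    """Yield a running average of recent frames: the 'virtual continuous image'."""
    recent = deque(maxlen=window)
    for frame in visible_frames:
        recent.append(frame.astype(np.float32))
        yield np.mean(recent, axis=0)
```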

156. Digital eyewear as in claim 155, wherein the virtual continuous image improves one or more of: static visual acuity, dynamic visual acuity, or peripheral visual acuity.

157. Digital eyewear as in claim 156, wherein the sequence of discontinuous images includes one or more of: a stroboscopic view of an object within the image; a sequence of still images, or a sequence of moving images, that provide the wearer with the virtual continuous image; the virtual continuous image including a moving image of the object.

158. Digital eyewear as in claim 155, wherein the computing device being disposed to adjust receipt of the continuous image is responsive to a periodic or random signal, the periodic or random signal indicating one or more of: a fraction of time during which to perform shading/inverse-shading, a frequency at which the wearer’s receipt of the sensory input is periodically shaded/inverse-shaded, an amount of shading/inverse-shading applied to the sensory input, or a technique by which to perform shading/inverse-shading.

159. Digital eyewear as in claim 158, wherein the periodic or random signal is responsive to one or more of: a relative motion of an object in the external source image, a relative speed of approach of the object in the external source image, a relative size of the object in the external source image, or a relative speed of transit of the object in the external source image.

160. Digital eyewear as in claim 158, wherein the computing device is disposed to adjust receipt of the continuous image in response to one or more of: an object or background in a field of view of the external source image, an amount of glare associated with the object or background in the field of view, an amount of audio or visual blur associated with the object or background in the field of view, an amount of audio or visual noise associated with the object or background in the field of view, or a size or speed of the object with respect to the background in the field of view.

161. Digital eyewear as in claim 155, wherein the virtual continuous image provides a view of a moving object in a field of view of a wearer; whereby the view assists the wearer when engaged in one or more of: using an optical system, participating in or viewing a high-speed or other sporting event, participating in or viewing an entertainment or social event, conducting or reviewing a law enforcement or military operation, conducting or reviewing a firefighting, search/rescue, or emergency responder operation, or conducting or reviewing a teaching or supervisory operation with respect to a skilled activity, the skilled activity including one or more of: bomb defusal, combat training, contact sports, horseback riding, marksmanship, a medical procedure, operating heavy machinery, operating a vehicle, or experiencing a medical condition.

162. Digital eyewear as in claim 155, wherein the computing device is disposed to adjust receipt of the received image in response to one or more audio/video inputs to which human senses lack high precision.

163. Digital eyewear as in claim 155, wherein the computing device is disposed to adjust receipt of the continuous image in response to an activity being performed by a wearer; the computing device is disposed to provide the sequence of discontinuous images in response to a state of that activity.

164. Digital eyewear as in claim 163, wherein the wearer is performing an activity with respect to participating in or viewing one or more of: a sporting event or public event; a law enforcement or military operation or event; or an entertainment event, an interactive entertainment event, a live-action event, a comedy or horror show, a theater event, or a role-playing event; one or more of the sequence of discontinuous images is responsive to a state of that event.

165. Digital eyewear as in claim 163, wherein the wearer is experiencing an initial phase, an ongoing phase, or a final phase, of a medical condition; the signal is responsive to a prediction, an evaluation, an analysis, a prescription, or a comparison with another medical condition, with respect to that medical condition.

166. Digital eyewear as in claim 163, wherein a wearer is operating a vehicle; one or more of the sequence of discontinuous images is responsive to an optical system associated with one or more of: operating the vehicle, performance of the vehicle; or a state of the vehicle.

167. Digital eyewear as in claim 166, wherein the optical system includes one or more of: glasses, goggles, facemasks or helmets, or contact lenses; one or more cameras; mirrors; scopes or sights; screens, computer screens, smartphone or mobile device screens; or windows or windshields.

168. Digital eyewear as in claim 163, wherein the wearer is coaching, observing, or training, another person, with respect to an activity in which the presence of the wearer would interfere.

169. Digital eyewear including a sensory input disposed to be coupled to an external source image; a computing device coupled to the sensory input, the computing device disposed to determine when the sensory input exceeds a sensory processing limit or a cognitive limit; wherein the eyewear is disposed to shade/inverse-shade a portion of the sensory input in response to a signal from the computing device; wherein the portion of the sensory input shaded/inverse-shaded by the eyewear includes a sequence of periodically or randomly selected durations of the external source image.

170. Digital eyewear as in claim 169, wherein the signal represents one or more of: an amount of shading/inverse-shading, a fraction of time during which to perform shading/inverse-shading, a period of time during which to perform shading/inverse-shading, or a technique for performing shading/inverse-shading.

171. Digital eyewear as in claim 169, wherein the eyewear is disposed to shade/inverse-shade the portion of the sensory input using one or more of: adjusting a luminance of the portion with respect to the image or a background of the image, or blocking input of luminance of the portion of the image or its background; polarization of the portion with respect to the image or its background; adjusting one or more of: a color of at least the portion with respect to the image or its background, a color balance of at least the portion with respect to the image or its background, a difference between a color of at least the portion with respect to the image or its background, or providing a false color associated with at least the portion of the image or its background.

172. Digital eyewear as in claim 169, wherein the periodic shading/inverse-shading is responsive to one or more of: an excess auditory or visual complexity or noise in a field of view of the external source image; an excess auditory or visual complexity or noise responsive to one or more of: unusual audio/video sensory patterns, or fast-moving or otherwise quickly changing objects, in a field of view of the external source image; or an excess brightness or loudness of the sensory input.

173. Digital eyewear as in claim 169, wherein the periodic shading/inverse-shading provides a view of a moving object in a wearer’s field of view, wherein the view excludes from the wearer’s view, one or more of: motion blur, blur with respect to a difference between focused and unfocused fields of view, blur with respect to peripheral vision, blur with respect to focus on the moving object, or blur with respect to a difference between the moving object and a background.

174. Digital eyewear as in claim 169, wherein the signal is responsive to one or more of: recognition by the computing device of a selected object in a field of view; recognition by the computing device of a direction of movement of the selected object; whether the selected object is disposed in a frontal or peripheral portion of the wearer’s field of view; or whether the selected object is disposed in a near or distant portion of the wearer’s field of view.

175. Digital eyewear as in claim 169, wherein the signal is responsive to one or more of: a wearer’s gaze direction, a wearer’s focal length.

176. Digital eyewear including a sensory input disposed to be coupled to a continuous external source image; a computing device coupled to the sensory input, the computing device disposed to determine when the sensory input exceeds a sensory processing limit or a cognitive limit of a wearer; wherein the computing device is disposed to adjust the wearer’s receipt of information in response to the external source image, wherein the wearer receives a virtual image limited to within the sensory processing limit or a cognitive limit of the wearer, whereby the wearer is provided a clear view of the external source.

177. Digital eyewear as in claim 176, wherein the computing device is disposed to adjust the wearer’s receipt of the received image so as to limit an amount of the received image viewable by a wearer of the digital eyewear, wherein the adjustment limits an amount of cognitive or sensory processing needed by the wearer to obtain a clear view of the external source.

178. Digital eyewear as in claim 176, wherein the computing device is disposed to adjust the received image with respect to an object in a field of view, using an augmented reality or virtual reality presentation, the presentation providing information with respect to one or more of: a location, a speed, a direction of motion, or a predicted landing spot, of the object with respect to the source image or a background thereof.

179. Digital eyewear as in claim 176, wherein the computing device is disposed to adjust the received image with respect to an object in a field of view, using one or more of: an artificial intelligence, machine learning, predictive analytics, or statistical technique; the technique providing information about the object with respect to a wearer.

180. Digital eyewear as in claim 176, wherein the computing device is disposed to adjust the received image with respect to an object in a field of view, in response to one or more of: a gaze direction of a wearer; an object in the wearer’s field of view; a direction of movement of the object; a direction of an object with respect to the wearer’s field of view; whether the object is disposed in a frontal or peripheral portion of the wearer’s field of view; whether the object is disposed in a near or distant portion of the wearer’s field of view; or a distance of an object in the wearer’s field of view.

181. Digital eyewear as in claim 176, wherein the computing device is disposed to adjust the received image with respect to an object in a field of view, in response to one or more of: a background of the object in a wearer’s field of view; a brightness or loudness of the background; a change in brightness or loudness of the background; a brightness or loudness of a portion of the background from a selected direction; a change in brightness or loudness of a portion of the background from a selected direction; or an amount of auditory or visual blur, complexity, or noise, in the background.

182. Digital eyewear as in claim 176, wherein the virtual image provides a view of a moving object in the wearer’s field of view, wherein the view assists the wearer when engaged in one or more of: using an optical system, participating in or viewing a high-speed or other sporting event, participating in or viewing an entertainment or social event, conducting or reviewing a law enforcement or military operation, conducting or reviewing a firefighting, search/rescue, or emergency responder operation, or conducting or reviewing a teaching or supervisory operation with respect to a skilled activity, the skilled activity including one or more of: bomb defusal, combat training, contact sports, horseback riding, marksmanship, a medical procedure, operating heavy machinery, operating a vehicle, or experiencing a medical condition.

183. Digital eyewear including a computing device coupled to a sensory input from an external source, the computing device disposed to adjust receipt of the sensory input; wherein the computing device is disposed to periodically or randomly provide a sequence of discontinuous images, the sequence of discontinuous images being integrable to provide a virtual continuous image when viewed; wherein the virtual continuous image improves static, dynamic, or peripheral visual acuity of a view of the external source.

184. Digital eyewear as in claim 183, wherein to periodically or randomly provide a sequence of discontinuous images, the computing device is disposed to perform one or more of: blocking the sensory input in response to a selected frequency, blocking the sensory input in response to a selected fraction of each one of a sequence of selected periods, blocking the sensory input in response to a selected duration of each one of a sequence of selected periods, blocking the sensory input in response to a selected technique.

185. Digital eyewear including a sensory input disposed to receive an image from an external source, the external source being disposed to intentionally overload a sensory/cognitive limit of a wearer; a computing device coupled to the sensory input, disposed to adjust the wearer’s receipt of the sensory input in response to the external source, or the sensory/cognitive limit; wherein the wearer’s receipt of the sensory input is limited to an amount of the sensory input that is viewable by the wearer, whereby the wearer’s receipt of the sensory input does not overload the sensory/cognitive limit of the wearer.

186. Digital eyewear as in claim 185, wherein the computing device is disposed to warn when the external source exceeds the sensory/cognitive limit of the wearer; the computing device is disposed to, when it warns that the external source exceeds or is about to exceed the sensory/cognitive limit of the wearer, adjust the wearer’s receipt of the sensory input in response to one or more of: the external source, the sensory/cognitive limit, or the warning.

187. Digital eyewear as in claim 185, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input in response to one or more of: an audio/video input near a lower limit or an upper limit of human sensory response; a warning signal disposed to be coupled from a potential source of audio/video input; or an electromagnetic signal disposed to be coupled to the computing device.

188. Digital eyewear as in claim 185, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input in response to one or more of: an excess audio/video complexity or noise of the sensory input; an excessive or inadequate brightness or loudness of the sensory input; or a rapid or sudden change in brightness or loudness of the sensory input.

189. Digital eyewear as in claim 185, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input in response to one or more of: a selected direction or change in direction of the sensory input with respect to the digital eyewear; a selected location or change in location of the digital eyewear; a distance or change in distance of the digital eyewear with respect to a selected object.

190. Digital eyewear including a sensory input disposed to receive an image from an external source, the external source being disposed to intentionally overload a sensory processing limit or cognitive limit of a wearer; a computing device coupled to an input of a warning signal, the warning signal being disposed to identify the onset of an intentional overload of the sensory processing limit or cognitive limit of the wearer; wherein the computing device is disposed, in response to the warning signal, to adjust the wearer’s receipt of the sensory input by limiting the wearer’s receipt of the sensory input; wherein the wearer’s receipt of the sensory input is limited to an amount of the sensory input that does not overload the sensory processing limit or cognitive limit of the wearer.

191. Digital eyewear as in claim 190, wherein the warning signal is encrypted or obfuscated; the computing device is disposed to de-encrypt or de-obfuscate the warning signal in response to a de-encryption or de-obfuscation key.
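
Claims 191–192 gate the warning on possession of a key. A minimal standard-library sketch of that gating; the XOR keystream and HMAC tag stand in for whatever real cipher and authentication scheme the eyewear would use, and every identifier here is hypothetical:

    import hashlib
    import hmac

    def deobfuscate_warning(payload, tag, key):
        """Return the clear warning only when the eyewear holds the right
        key; eyewear without it gets None and never sees the warning."""
        keystream = hashlib.sha256(key).digest()
        clear = bytes(b ^ keystream[i % len(keystream)]
                      for i, b in enumerate(payload))
        expected = hmac.new(key, clear, hashlib.sha256).digest()
        return clear if hmac.compare_digest(tag, expected) else None

    # Round trip: obfuscate on the sender side, recover on the eyewear side.
    key = b"shared-eyewear-key"
    warning = b"OVERLOAD IMMINENT"
    stream = hashlib.sha256(key).digest()
    payload = bytes(b ^ stream[i % len(stream)] for i, b in enumerate(warning))
    tag = hmac.new(key, warning, hashlib.sha256).digest()
    print(deobfuscate_warning(payload, tag, key))  # b'OVERLOAD IMMINENT'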

192. Digital eyewear as in claim 190, wherein the warning signal is encrypted or obfuscated; only those wearers of digital eyewear whose digital eyewear has a de-encryption or de-obfuscation key corresponding to the warning signal are able to prevent overload of cognitive or sensory processing needed by the wearer.

193. Digital eyewear as in claim 190, wherein the external source includes one or more of: an explosive, law enforcement equipment, or military equipment.

194. Digital eyewear as in claim 190, wherein the external source is disposed to intentionally overload one or more of: an excess audio/video complexity or noise within a field of view or hearing range of one or more persons; an excess brightness or loudness of the sensory input; or a sensory processing limit or cognitive limit.

195. Digital eyewear as in claim 190, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input by shading/inverse-shading audio/video from one or more external sources.

196. Digital eyewear as in claim 190, wherein the computing device is disposed to adjust the wearer’s response to the sensory input, the adjustment including altering one or more of: the wearer’s brightness reception, the wearer’s focal length, the wearer’s gaze direction, or the wearer’s response to color frequencies.

197. A system including digital eyewear having a sensory input disposed to be coupled to an external source; the digital eyewear including a computing device disposed to adjust the wearer’s receipt of the sensory input, the computing device being responsive to a warning signal; the warning signal being responsive to the wearer’s upcoming entry/exit with respect to a selected region, whereby an ambient audio/visual source is sufficiently altered in the selected region that the wearer’s sensory processing limit or cognitive limit with respect to that audio/visual source is affected by the entry/exit.

198. A system as in claim 197, wherein the external source includes one or more of: an ambient audio/visual source, construction equipment, the sun; the selected region is disposed to attenuate or block the effect of the external source.

199. A system as in claim 197, wherein the warning signal is disposed at or near an entrance/exit of the selected region, whereby the wearer’s upcoming entry/exit with respect to the selected region is identifiable in response to location or velocity with respect to the selected region.
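
Claim 199 infers an upcoming entry or exit from the wearer’s location and velocity relative to the region. A minimal planar sketch, assuming a circular region and straight-line extrapolation; the coordinates, units, and two-second horizon are assumptions for illustration:

    import math

    def crossing_imminent(pos, vel, center, radius, horizon_s=2.0):
        """True when extrapolated motion crosses the region boundary within
        horizon_s seconds, i.e. a predicted entry or exit."""
        def inside(p):
            return math.hypot(p[0] - center[0], p[1] - center[1]) < radius
        future = (pos[0] + vel[0] * horizon_s, pos[1] + vel[1] * horizon_s)
        return inside(pos) != inside(future)

    # A driver 40 m from a tunnel region of radius 15 m, closing at 25 m/s:
    # the 2 s look-ahead places the wearer inside, so the warning fires.
    print(crossing_imminent((0.0, 0.0), (25.0, 0.0), (40.0, 0.0), 15.0))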

200. A system as in claim 197, wherein the warning signal is associated with the wearer’s upcoming entry/exit with respect to an enclosed space; wherein the ambient audio/visual source is sufficiently altered in the enclosed space that the wearer’s sensory processing limit or cognitive limit with respect to that audio/visual source is affected by the entry/exit.

201. A system as in claim 200, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input when the wearer enters/exits the enclosed space.

202. A system as in claim 200, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input by shading/inverse-shading brightness or loudness from a selected source.

203. A system as in claim 200, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to one or more of: an amount by which the enclosed space affects the sensory input; or a speed at which the wearer enters/exits the enclosed space.

204. A system as in claim 200, wherein the enclosed space includes a terrain feature that blocks an ambient light source by a sufficient amount that the wearer’s sensory input is affected; the circuit is responsive to a warning signal from near an entrance or exit from the enclosed space.

205. A system as in claim 204, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input in response to an amount by which the enclosed space affects the sensory input.

206. A system as in claim 204, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input in response to a speed at which the wearer enters or exits the enclosed space.

207. A system as in claim 200, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input by shading/inverse-shading brightness or loudness from a selected source not available within the enclosed space.

208. A system as in claim 200, wherein the circuit is disposed to adjust the wearer’s receipt of the sensory input by shading/inverse-shading brightness or loudness in response to one or more of: an amount by which the enclosed space affects the sensory input; or a speed at which the wearer enters/exits the enclosed space.

209. A system as in claim 200, wherein the enclosed space includes one or more of: a building, a room, a tunnel, a vehicle.

210. A system including digital eyewear having a sensory input disposed to be coupled to an external source; a circuit disposed to adjust the wearer’s receipt of the sensory input, the circuit being responsive to a change in the external source; the circuit operating in response to the external source with a response time less than the response time of the wearer’s eye; whereby a change in the external source is blocked by a sufficient amount that the wearer’s sensory input is not affected.
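
Claim 210 turns on timing: the circuit must counteract a change in the external source faster than the eye can respond. One sketch of the control-loop core, assuming per-frame luminance samples; the numeric constants are order-of-magnitude assumptions, not figures from the application:

    EYE_RESPONSE_S = 0.15    # rough scale of the eye's response to a change
    FRAME_PERIOD_S = 0.002   # a 500 Hz loop acts well inside that window

    def shade_gain(prev_lum, new_lum, max_ratio=1.5):
        """Gain to apply so delivered luminance never jumps by more than
        max_ratio per frame: shading for sudden brightening,
        inverse-shading for sudden darkening."""
        if prev_lum <= 0.0:
            return 1.0
        ratio = new_lum / prev_lum
        if ratio > max_ratio:           # sudden brightening: shade down
            return max_ratio / ratio
        if ratio < 1.0 / max_ratio:     # sudden darkening: boost back up
            return 1.0 / (ratio * max_ratio)
        return 1.0

    print(shade_gain(100.0, 400.0))  # 0.375 -> delivered jump capped at 1.5x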

211. A system as in claim 210, wherein the circuit is disposed to perform shading/inverse-shading of brightness or loudness in response to one or more of: a selected source, a brightness of an ambient environment, or an amount of glare falling on the wearer’s eyes or ears.

212. A system as in claim 210, wherein the circuit is disposed to perform shading/inverse-shading of brightness or loudness in response to one or more of: an excess audio/video complexity or noise of the sensory input; an excessive or inadequate brightness or loudness of the sensory input; or a rapid or sudden change in brightness or loudness of the sensory input.

213. A system as in claim 210, wherein the circuit is disposed to perform shading/inverse-shading of brightness or loudness in response to one or more of: an amount by which the wearer’s eyes or ears are affected by the sensory input; or a speed of the change in the external source.

214. A system as in claim 210, wherein the circuit is disposed to perform shading/inverse-shading brightness or loudness in response to a sudden change in brightness or loudness of the sensory input.

215. Digital eyewear including a sensory input disposed to be coupled to an external source; a signal receiver disposed to determine a selected portion of the external source for which to adjust a wearer’s attention; a computing device disposed to adjust the wearer’s receipt of the sensory input, the computing device being responsive to the selected portion and to a signal indicating how to adjust the wearer’s attention.

216. Digital eyewear as in claim 215, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input so as to direct the wearer’s attention to a selected portion thereof.

217. Digital eyewear as in claim 215, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to improve accurate identification of one or more of: a location, a movement, a speed, or a direction, of an object or person in a sporting event, a law enforcement or military operation or event, or an emergency responder or search/rescue operation or event.

218. Digital eyewear as in claim 215, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to improve accurate identification of one or more of: a location, a movement, a speed, or a direction, of an object or person in a sporting event, a law enforcement or military operation or event, or an emergency responder or search/rescue operation or event; wherein a coach, observer, scout, or trainer, can more accurately determine a skill level of a player with respect to that sporting event.

219. Digital eyewear as in claim 218, wherein the sporting event includes one or more of: aircraft piloting or horseback dressage; automobile racing, horse racing, or skiing; archery, baseball, bowling, golf, handball, shooting, or tennis; basketball, football, jai alai, polo or water polo, roller skating, or soccer.

220. Digital eyewear as in claim 215, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to include highlighting or suppressing at least a part of the selected portion of the external source.

221. Digital eyewear as in claim 220, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to include highlighting or suppressing at least a part of the selected portion of the external source, including one or more of: suppressing a part of the external source other than a selected object toward which the wearer’s attention is to be drawn, or suppressing a part of the external source from which the wearer’s attention is to be drawn away.

222. Digital eyewear as in claim 220, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to include suppressing at least a part of the selected portion of the external source, including one or more of: periodically or randomly blocking the suppressed part, wherein the wearer’s eyes or ears receive the suppressed part only some of the time.

223. Digital eyewear as in claim 220, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input so as to periodically block the suppressed part in response to one or more of: a frequency for blocking the suppressed part, a fraction of each one of a sequence of periods for blocking the suppressed part, a duration for each of those periods for blocking the suppressed part, a technique for blocking the suppressed part.

224. Digital eyewear as in claim 220, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to include highlighting or suppressing at least a part of the selected portion of the external source, including one or more of: reducing a brightness of part of the external source other than a selected object toward which the wearer’s attention is to be drawn, reducing a brightness of part of the external source from which the wearer’s attention is to be drawn away.
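
Claims 220–224 amount to applying a spatial mask: leave the selected portion alone and dim everything else, or the reverse. A minimal sketch over a 2-D luminance array, with the bounding box standing in, hypothetically, for whatever object recognition supplies the selected portion:

    def dim_outside(frame, box, gain=0.3):
        """Reduce brightness everywhere except the selected object's
        bounding box, drawing attention toward it; swapping the condition
        dims the selected part instead, drawing attention away.

        frame: 2-D list of luminance values.
        box: (row0, row1, col0, col1), half-open ranges.
        """
        r0, r1, c0, c1 = box
        return [[v if (r0 <= r < r1 and c0 <= c < c1) else v * gain
                 for c, v in enumerate(row)]
                for r, row in enumerate(frame)]

    frame = [[100, 100, 100], [100, 200, 100], [100, 100, 100]]
    print(dim_outside(frame, (1, 2, 1, 2)))  # only the centre keeps full brightness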

225. Digital eyewear as in claim 215, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to include reducing a brightness of at least a part of the external source, including periodically or randomly blocking the part reduced in brightness, wherein the wearer’s eyes or ears receive the part reduced in brightness only some of the time.

226. Digital eyewear as in claim 225, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to include periodically or randomly blocking at least a part of the external source, including in response to a frequency for blocking the part reduced in brightness, a fraction of each one of a sequence of periods for blocking the part reduced in brightness, a duration for one or more of those periods for blocking the part reduced in brightness, a technique for blocking the part reduced in brightness.

227. Digital eyewear as in claim 215, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input to include highlighting or suppressing at least a part of the selected portion of the external source, including one or more of: altering a color of part of the external source other than a selected object toward which the wearer’s attention is to be drawn, altering a color of part of the external source from which the wearer’s attention is to be drawn away.

228. Digital eyewear as in claim 227, wherein altering a color of part of the external source includes one or more of: replacing the color with another color, replacing the color with a shade of grey.
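
Both alternatives in claim 228, replacing a color with another color or with a shade of grey, reduce to a per-pixel operation. A sketch using the common Rec. 601 luma weights for the grey case; the substitute colour is an arbitrary placeholder:

    def replace_color(rgb, with_grey=True, substitute=(255, 0, 255)):
        """Collapse a pixel to its luma-weighted grey (Rec. 601 weights),
        or swap in a fixed substitute colour."""
        if with_grey:
            r, g, b = rgb
            y = int(0.299 * r + 0.587 * g + 0.114 * b)
            return (y, y, y)
        return substitute

    print(replace_color((200, 50, 50)))         # (94, 94, 94)
    print(replace_color((200, 50, 50), False))  # (255, 0, 255)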

229. Digital eyewear as in claim 227, wherein altering a color of part of the external source includes one or more of: periodically replacing the part for which to alter color with another image, wherein the wearer’s eyes or ears receive the part for which to alter color only some of the time, or randomly replacing the part for which to alter color with another image, wherein the wearer’s eyes or ears receive the part for which to alter color only some of the time.

230. Digital eyewear as in claim 227, wherein the computing device is disposed to adjust the wearer’s receipt of the sensory input by periodically replacing a part of the sensory input so as to alter a color thereof, in response to one or more of: a frequency for replacing the part for which to alter color, a fraction of each of a sequence of periods for replacing the part for which to alter color, a duration for one or more of those periods for replacing the part for which to alter color, a technique for replacing the part for which to alter color.

231. Digital eyewear as in claim 215, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more of: one or more selected objects recognized by the computing device, or one or more selected portions of the wearer’s field of view.

232. Digital eyewear as in claim 215, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more of: a selected activity, object, or vehicle, in a high-speed event; a selected ball, player, or sporting equipment in a sporting event; a selected activity, actor, background element, detail, expression, prop, or script element, in an entertainment event; or a selected person, group, activity, background element, detail, or expression, in a social event.

233. Digital eyewear as in claim 215, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more of: a selected action, background element, detail, expression, foreground element, landing zone, machine, movement activity, object, person, target, threat, or vehicle, in a law enforcement or military operation or event; or a selected action, background element, detail, expression, foreground element, machine, movement activity, object, or person, in a firefighting, search/rescue, or emergency responder operation or event.

234. Digital eyewear as in claim 215, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more elements in the wearer’s field of view for which the wearer has a cognitive, observational, or sensory disability.

235. Digital eyewear including a computing device coupled to an external source, the computing device disposed to determine when the external source exceeds a sensory processing limit or a cognitive limit; the computing device disposed to adjust receipt of the external source, the computing device being disposed to block the external source in response to a periodic or random effect.

236. Digital eyewear as in claim 235, wherein the periodic or random effect indicates one or more of: a frequency of blocking the sensory input, a fraction of each period of blocking the sensory input, a duration of each period of blocking the sensory input, or a technique for blocking the sensory input.

237. Digital eyewear as in claim 235, wherein when the computing device blocks the external source in response to a periodic or random effect, the computing device provides a virtual continuous image including a sequence of discontinuous images, the virtual continuous image being limited to a cognitive or sensory limit of a user; the virtual continuous image including one or more of: a sequence of discontinuous still images, or a sequence of discontinuous moving images.

238. Digital eyewear including a sensory input disposed to receive a continuous image from an external source; a computing device coupled to the sensory input, the computing device disposed to adjust receipt of the continuous image; wherein the computing device is disposed to periodically or randomly provide a sequence of discontinuous images, the sequence of discontinuous images being integrable to provide a virtual continuous image when viewed; wherein the virtual continuous image limits an amount of cognitive or sensory processing needed to obtain a view of the external source.
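
The core operation of claim 238 is gating a continuous stream into a discontinuous sequence that the visual system then re-integrates. A minimal frame-dropping sketch, assuming the input arrives as discrete frames at a known rate; the parameter names are illustrative:

    def gate_stream(frames, fps, block_hz, blocked_fraction):
        """Drop frames that fall in the blocked part of each strobe period,
        yielding the claimed discontinuous sequence."""
        period = 1.0 / block_hz
        kept = []
        for n, frame in enumerate(frames):
            phase = ((n / fps) % period) / period
            if phase >= blocked_fraction:   # pass-through part of the period
                kept.append(frame)
        return kept

    # 120 fps input gated at 30 Hz with 50% blocking keeps half the frames.
    print(len(gate_stream(list(range(120)), 120, 30.0, 0.5)))  # 60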

239. Digital eyewear as in claim 238, wherein the selected portion of the external source for which to adjust a wearer’s attention includes one or more elements in the wearer’s field of view for which the wearer has a cognitive, observational, or sensory disability.

240. Digital eyewear as in claim 238, wherein the virtual continuous image reduces an amount of cognitive or sensory processing needed for the view of the external source; the view of the external source includes a portion of the continuous image including a user’s peripheral or non-frontal view of the external source.

241. Digital eyewear as in claim 238, wherein the virtual continuous image improves visual acuity of the view of the external source.

242. Digital eyewear as in claim 241, wherein the virtual continuous image improves one or more of: static visual acuity, dynamic visual acuity, or peripheral visual acuity.

243. Digital eyewear as in claim 238, wherein the sequence of discontinuous images provides a stroboscopic view of an object within the image.

244. Digital eyewear as in claim 238, wherein the sequence of discontinuous images includes one or more of: a sequence of still images, or a sequence of moving images, that provide the wearer with the virtual continuous image; the virtual continuous image including a moving image of the object.

245. Digital eyewear including a sensory input disposed to receive a continuous image from an external source, wherein the continuous image represents one or more of: an object that is in relatively rapid motion with respect to a user or is otherwise difficult to see when the user is looking directly at the object; an object that is primarily viewable by the user using their peripheral vision, or using another portion of the user’s vision that has a lesser degree of natural acuity; or an object that involves the user’s rapid reaction thereto; a computing device coupled to the sensory input, the computing device disposed to adjust receipt of the continuous image; wherein the computing device is disposed to periodically or randomly provide a sequence of discontinuous images, the sequence of discontinuous images being integrable by the user to provide a virtual continuous image when viewed; wherein the virtual continuous image improves the user’s ability to sense the external source without degrading the user’s audio or visual acuity with respect to the object.

246. Digital eyewear as in claim 245, wherein the sensory input includes a view of one or more of: information with respect to an emergency action, information with respect to a loss of an engine in aviation, a trajectory of a vehicle, or use of equipment by another person.

247. Digital eyewear as in claim 245, wherein the sensory input includes a view of one or more of: a projectile, sports equipment, or terrain.

248. Digital eyewear as in claim 245, wherein the sensory input includes a view of one or more of: information with respect to operating a flying vehicle, operating a ground vehicle, operating a water vehicle, participating in a sport using relatively rapid sports equipment, or participating in an activity in which critical decisions are made rapidly or with little information.

249. Digital eyewear as in claim 248, wherein the flying vehicle includes one or more of: an aircraft, an ultralight aircraft, a glider, a hang-glider, or a helicopter.

250. Digital eyewear as in claim 248, wherein the ground vehicle includes one or more of: an automobile, a racing car, a truck, an all-terrain vehicle, a camper or recreational vehicle, a motorcycle, a dirt bike, or a bicycle or unicycle.

251. Digital eyewear as in claim 248, wherein the water vehicle includes a speedboat, a motorboat, a sailboat, or a cigarette boat.

252. Digital eyewear as in claim 248, wherein the sport includes one or more of: baseball, basketball, football, field hockey, ice hockey, lacrosse, or soccer.

253. Digital eyewear as in claim 245, wherein the sensory input includes a view of one or more of: a circumstance in which audio or visual acuity is valuable to the viewer, a circumstance in which the user would gain substantially from enhanced audio or visual acuity.

254. Digital eyewear as in claim 245, wherein the sensory input includes a field of view associated with one or more of: controlling a vehicle, performing in or observing a sports event, performing a law enforcement officer action, or conducting emergency care.

255. Digital eyewear as in claim 254, wherein the vehicle is one or more of: an aircraft, a watercraft, or a ground vehicle.

256. Digital eyewear as in claim 245, wherein the vehicle is a ground vehicle, and the sensory input includes information with respect to one or more of: an obstacle or other hazard, a traffic instruction, a limit on user sensory acuity or capacity, a limit on vehicle operation, an internal vehicle status, an amount of glare or a change in ambient lighting.

257. Digital eyewear as in claim 245, wherein the vehicle is a ground vehicle, and the sensory input includes audio or video information with respect to one or more of: entry into or exit from a relatively dark region or a tunnel or a relatively enclosed region.

258. Digital eyewear as in claim 257, wherein when an operator of the vehicle enters into or exits from a region with differing audio or video effects, the digital eyewear is disposed to alter the audio or video information presented to the operator upon entry into or exit from the region with differing audio or video effects.

259. Digital eyewear as in claim 245, wherein the sensory input involves an object moving relatively rapidly with respect to a vehicle being controlled by a user.

260. Digital eyewear as in claim 259, wherein the object includes one or more of: another aircraft, a building or tower, or ground terrain or a marker thereon.

261. Digital eyewear as in claim 245, wherein the vehicle is an aircraft, and the sensory input includes information with respect to one or more of: a compass direction or GPS location information, a radio or traffic beacon, a set of transponder data, weather or other atmospheric effects, a weather report or weather sighting, an updraft or downdraft, an oxygenation level of the aircraft cabin, a runway heading, or a terrain feature.

262. Digital eyewear as in claim 245, wherein the terrain feature includes one or more of: a height of a building or tower, a height of a hill or mountain, a glare or brightness effect from the sun or a reflection thereof, a glare or brightness effect from a body of water or a cloud cover, a glare or brightness effect from a reflecting surface, a brightness effect from climbing toward higher altitude.

263. Digital eyewear as in claim 245, wherein the vehicle is an aircraft, and the sensory input includes information with respect to one or more of: an air traffic control zone, an air traffic guideline, a controlled airspace, a defined airway travel path or designated air travel instruction, a glide path or glide path guideline, or a noise control guideline.

264. Digital eyewear as in claim 245, wherein the sensory input includes information presentable to a user using one or more of: a heads-up display, or an augmented reality or virtual reality environment.

265. Digital eyewear as in claim 245, wherein the sensory input includes information with respect to objects in a user’s field of view that are backlit.

266. Digital eyewear as in claim 245, wherein the sensory input includes information with respect to ground effects including a presence of one or more of: rapid road curves or other road changes, road upgrades or downgrades, road tilting, slippery portions of the road, banking of the road, other road gradients, speed bumps, changes in road surfaces, or terrain hazards.

267. Digital eyewear as in claim 266, wherein the terrain hazards include one or more of: deer or other wildlife crossing, falling rocks, possible flooding, persons crossing, or other persons or wildlife or objects on the road that might have an effect on driving conditions.

268. Digital eyewear as in claim 245, wherein the sensory input includes information with respect to a presence of ambient or upcoming weather, including one or more of: fog, mist, rain, wind, or other effects of current or upcoming precipitation; or lightning or thunder.

269. Digital eyewear as in claim 245, wherein the sensory input includes glare, excessive brightness, or inadequate brightness.

270. Digital eyewear as in claim 245, wherein the vehicle includes a water vehicle, and the sensory input includes audio or video information with respect to one or more of: underwater obstacles or surface obstacles.

271. Digital eyewear as in claim 245, wherein the sensory input includes a view of one or more of: information with respect to participation in, or observation of, a sporting event.

272. Digital eyewear as in claim 271, wherein the information relates to participation in a sporting event, and includes a location or movement of a baseball, basketball, football, hockey puck, jai alai ball, soccer ball, tennis ball, or other sporting equipment.

273. Digital eyewear as in claim 272, wherein the sporting equipment is difficult to see in response to background lighting, excessive lighting, inadequate lighting, or a visually noisy background.

274. Digital eyewear as in claim 272, wherein the sporting equipment is aimed at a target which is difficult to see in response to background lighting, excessive lighting, inadequate lighting, or a visually noisy background.

274. Digital eyewear as in claim 271, wherein the information relates to participation in a sporting event, and relates to one or more of: a race, a rodeo event, a shooting event, a skiing event, or terrain conditions with respect to a race.

275. Digital eyewear as in claim 271, wherein the information relates to observation of a sporting event, and includes a location or movement of a baseball, basketball, football, hockey puck, jai alai ball, soccer ball, tennis ball, or other sporting equipment.

276. Digital eyewear as in claim 271, wherein the information relates to observation of a sporting event, and includes information available to one or more of: a coach, a scout, or a spectator.

277. Digital eyewear including a sensory input disposed to receive a continuous image from an external source, wherein the continuous image represents an object that involves a user’s rapid reaction thereto; a computing device coupled to the sensory input, the computing device disposed to adjust receipt of the continuous image; wherein the computing device is disposed to periodically or randomly provide a sequence of discontinuous images, the sequence of discontinuous images being integrable by the user to provide a virtual continuous image when viewed; wherein the virtual continuous image improves the user’s ability to sense the external source without degrading the user’s audio or visual acuity with respect to the object; wherein the user is involved in a rapid-response or a life-critical decision.

278. Digital eyewear as in claim 277, wherein the user includes one or more of: firefighting personnel, search/rescue personnel, emergency responders, emergency room personnel, medical personnel, or law enforcement personnel.

279. Digital eyewear as in claim 278, wherein the user includes law enforcement personnel, and the sensory input includes information with respect to a shoot/don’t-shoot decision.

280. Digital eyewear as in claim 278, wherein the user includes law enforcement personnel, and the sensory input includes information with respect to whether an object in the user’s field of view is likely to be a weapon or another possibly dangerous object.

281. Digital eyewear as in claim 278, wherein the computing device operates using an artificial intelligence or a machine learning technique to present the user with an evaluation of an amount of attention the user should provide to an object in the user’s field of view.

282. Digital eyewear as in claim 278, wherein the computing device operates using an artificial intelligence or a machine learning technique to present the user with information with respect to an identification of another person.

283. Digital eyewear as in claim 282, wherein the artificial intelligence or machine learning technique includes facial recognition.

284. Digital eyewear as in claim 278, wherein the computing device operates using an artificial intelligence or a machine learning technique to present the user with information with respect to whether another person is likely to exhibit dangerous or violent behavior.

285. Digital eyewear as in claim 278, wherein the computing device operates using an artificial intelligence or a machine learning technique to present the user with information with respect to whether another person is likely to manifest violent emotion likely to lead to an armed confrontation.

286. Digital eyewear as in claim 278, wherein the user includes emergency responders or emergency room personnel, and the sensory input includes information with respect to a medical care decision.

287. Digital eyewear as in claim 286, wherein the computing device is disposed to provide information with respect to whether a patient is subject to a selected medical condition.

288. Digital eyewear as in claim 286, wherein the computing device is disposed to use an artificial intelligence or a machine learning technique to determine whether a patient is subject to a selected medical condition.

289. Digital eyewear as in claim 286, wherein the computing device is disposed to provide information with respect to whether a patient is subject to a negative factor regarding one or more of: airway, breathing, circulation, disability, or exposure.

290. Digital eyewear as in claim 286, wherein the computing device is disposed to use an artificial intelligence or a machine learning technique to determine whether a patient is subject to a negative factor regarding one or more of: airway, breathing, circulation, disability, or exposure.

291. Digital eyewear as in claim 286, wherein the computing device is disposed to provide information with respect to whether a patient is likely to have an allergic reaction to medical care.

292. Digital eyewear as in claim 278, wherein the user includes firefighting personnel or search/rescue personnel, and the sensory input includes information with respect to whether a person or animal needs assistance.

293. Digital eyewear as in claim 292, wherein the computing device is disposed to provide information with respect to whether any potential victims are likely to be present in a hazardous zone.

294. Digital eyewear as in claim 292, wherein the computing device is disposed to provide information with respect to whether particular regions of a building or other structure remain sound and capable of carrying the weight of firefighting personnel.

295. Digital eyewear as in claim 292, wherein the computing device is disposed to provide information with respect to one or more of: a scope or severity of a hazardous zone, the presence of potential victims in that zone, the possibility of that zone threatening the structural integrity of a building or other structure.

296. Digital eyewear as in claim 292, wherein the computing device is disposed to use an artificial intelligence or machine learning technique to provide information with respect to one or more of: identifying a heated region or other hazardous zone, identifying sensory input corresponding to an animal or a person or to calls for help, identifying audio/video information corresponding to a relatively weakened building or other structure, or identifying likely safe routes of travel within the zone.

297. Digital eyewear as in claim 278, wherein the sensory input is disposed to provide information with respect to one or more of: a military circumstance, a bomb-defusing event, or industrial accident prevention.

298. Digital eyewear including a sensory input coupleable to a user and disposed to receive a continuous image from an external source, wherein the continuous image represents an augmented reality or virtual reality experience; wherein the augmented reality or virtual reality experience is disposed to be altered or generated by an entity other than the user; a computing device coupled to the sensory input, the computing device disposed to adjust receipt of the continuous image; wherein the computing device is disposed to periodically or randomly provide a sequence of discontinuous images, the sequence of discontinuous images being integrable by the user to provide a virtual continuous image when viewed; wherein the virtual continuous image improves the user’s ability to sense the external source without degrading the user’s audio or visual acuity with respect to the object; wherein the user can either (A) experience, (B) receive feedback with respect to, (C) learn from, or (D) be assessed with respect to, the altered or generated augmented reality or virtual reality experience.

299. Digital eyewear as in claim 298, wherein the computing device is disposed to provide feedback to the user with respect to the altered or generated augmented reality or virtual reality experience.

300. Digital eyewear as in claim 298, wherein the computing device is disposed to provide feedback to the user with respect to the altered or generated augmented reality or virtual reality experience; wherein the user can learn how their performance can be improved with respect to the altered or generated augmented reality or virtual reality experience.

301. Digital eyewear as in claim 300, wherein the computing device is disposed to provide information, with respect to the user’s actions with respect to the altered or generated augmented reality or virtual reality experience, to one or more of: an evaluator, an instructor, a supervisor, another person, or another device.

302. Digital eyewear as in claim 298, wherein the augmented reality or virtual reality experience includes information from a driving exercise by another driver.

303. Digital eyewear as in claim 302, wherein the driving exercise is disposed to be performed by one or more of: a race car driver, motorcyclist, dirt biker, or bicyclist.

304. Digital eyewear as in claim 298, wherein the augmented reality or virtual reality experience includes information from sensory recording equipment, the sensory recording equipment including one or more of: audio/video recording equipment, haptic recording equipment, or olfactory recording equipment.

305. Digital eyewear as in claim 301, wherein the computing device is disposed to score the user with respect to the user’s actions in response to the altered or generated augmented reality or virtual reality experience.

306. Digital eyewear as in claim 298, wherein the driving exercise is disposed to be performed by one or more of: a celebrity, a friend of the user, an instructor or supervisor, a known expert, a person already familiar with the driving experience, another person, or the user’s own past performance.

307. Digital eyewear as in claim 306, wherein the computing device is disposed to score the user with respect to a comparison of the user’s actions with another user or the user’s own past performance.

308. Digital eyewear as in claim 298, wherein the driving exercise is disposed to be performed in response to a measure of the user’s experience at the exercise.

309. Digital eyewear as in claim 308, wherein the computing device is disposed to score the user with respect to a comparison of the user’s actions with the measure of the user’s experience at the exercise.

310. Digital eyewear as in claim 309, wherein the score is responsive to one or more of: the fastest, safest, most interesting, most scenic, most entertaining, or most exciting, performance with respect to driving the course.

311. Digital eyewear as in claim 308, wherein the computing device is disposed to score the user with respect to a measure of capability of the user’s equipment used with the exercise.

312. Digital eyewear as in claim 298, wherein the augmented reality or virtual reality experience includes information from an exercise including one or more of: performing in a real-world course for which legal or practical restrictions prevent access, performing in an artificial course that is not currently believed to be physically possible, performing in an artificial course that uses differing laws of physics, performing in an exercise modeled on an environment in which the player does not use a vehicle, performing in a law enforcement or emergency responder exercise, performing in another exercise.

313. Digital eyewear as in claim 298, wherein the augmented reality or virtual reality experience includes information from an exercise including one or more of: baking or cooking, ballet or dancing, conducting a medical examination, construction work, interrogating witnesses, performing gymnastics or other athletic skills, performing surgery, piloting a fighter aircraft, playing a musical instrument, playing a sport, playing master-level chess, playing poker, recognizing deception, safely performing law enforcement work, sexing eggs, singing alone, singing with a group, or another skill not easily represented in a symbolic form.

314. Digital eyewear as in claim 298, wherein the virtual continuous image improves the user’s audio or visual acuity with respect to the object without interfering with the user’s normal sensory activity.

315. Digital eyewear as in claim 298, wherein the virtual continuous image provides a relatively greater amount of audio or visual acuity with respect to the object when sensed by the user.

316. Digital eyewear as in claim 315, wherein the user’s sensory image of the object includes an audio or visual sense.

317. Digital eyewear as in claim 298, wherein the sensory input includes a continuous visual image from the external source; the periodic or random sequence of discontinuous images is tuned with respect to a frequency of the relatively rapid motion of the object; the virtual continuous image is disposed to allow the user to see the object in a substantially stationary position in the user’s field of view while the object is moving relative to the user.

318. Digital eyewear as in claim 317, wherein the relatively rapid motion includes a substantially rotational motion; whereby the virtual continuous image is disposed to allow the user to see the object in a substantially stationary position in the user’s field of view while the object is rotating.
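
Claims 317–318 describe classic stroboscope tuning: when the presentation frequency matches the rotation frequency (or an integer divisor of it), every glimpse catches the object in the same pose, so the integrated percept is stationary. A small worked sketch of the residual motion per glimpse:

    def apparent_step_deg(rotation_hz, strobe_hz):
        """Degrees the object appears to advance between glimpses: 0 (mod
        360) freezes it; a small offset makes it creep slowly, which is
        how a strobe is tuned against a spinning part by hand."""
        return (360.0 * rotation_hz / strobe_hz) % 360.0

    print(apparent_step_deg(30.0, 30.0))  # 0.0 -> wheel appears stationary
    print(apparent_step_deg(30.0, 29.5))  # ~6.1 -> slow apparent rotation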

319. Digital eyewear as in claim 317, wherein the virtual continuous image is disposed to allow the user to examine the object for one or more of: cracks, dents, other damage, loose attachment, maladjustment, or misalignment, while the object is moving.

320. Digital eyewear as in claim 319, wherein the object includes a wheel; the virtual continuous image is disposed to allow the user to determine whether the object is properly centered, while the object is rotating.

321. Digital eyewear as in claim 319, wherein the object is disposed on a lathe or another rotating base; the virtual continuous image is disposed to allow the user to determine whether the object has any damage, while the rotating base is rotating.

322. Digital eyewear as in claim 319, wherein the object includes a turbine blade or another machine part; the virtual continuous image is disposed to allow the user to examine the machine part while the machine is in operation.

323. Digital eyewear as in claim 319, wherein the virtual continuous image is disposed to allow the user to determine whether the turbine blade is properly aligned with respect to another turbine blade while the machine is in operation.

324. Digital eyewear as in claim 298, wherein the sensory input includes a continuous audio input from the external source; the periodic or random sequence of discontinuous images includes a sequence of audio signals; the sequence of audio signals is tuned with respect to a frequency of the continuous audio input; whereby the virtual continuous image is disposed to allow the user to hear a selected portion of the continuous audio input.

325. Digital eyewear as in claim 324, wherein the virtual continuous image is disposed to allow the user to hear a periodically repeated portion of the continuous audio input.

326. Digital eyewear as in claim 324, wherein the sensory input includes a continuous audio input from the external source, the continuous audio input including a signal outside a range of human hearing; the virtual continuous image is disposed to allow the user to hear, within the range of human hearing, a portion of the continuous audio input otherwise outside the range of human hearing.
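
One plausible mechanism behind claims 324–326 is that periodic gating acts as sampling, so a tone above the audible range folds (aliases) down into the audible band. A sketch of that frequency-folding arithmetic; this is an interpretation, not a mechanism the claims themselves spell out:

    def folded_frequency(signal_hz, gate_hz):
        """Frequency heard after gating/sampling the input at gate_hz:
        the input folds into the band [0, gate_hz / 2]."""
        f = signal_hz % gate_hz
        return min(f, gate_hz - f)

    # A 41 kHz ultrasonic tone gated at 40 kHz is heard as a 1 kHz tone.
    print(folded_frequency(41_000, 40_000))  # 1000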

327. Digital eyewear as in claim 324, wherein the sensory input includes a continuous audio input from an engine or another operable device; the virtual continuous image is disposed to allow the user to hear whether the operable device is emitting any unexpected sounds, any other audio evidence of damage or mistuning, or whether it is exhibiting signs of being about to fail.

328. Digital eyewear as in claim 298, wherein the sensory input is disposed to be received without tracking a direction or focal length associated with the user’s vision.

329. Digital eyewear as in claim 328, wherein the digital eyewear is disposed to operate with respect to the user’s entire field of view; the user’s audio or visual acuity is improved with respect to an ambient environment.

330. Digital eyewear as in claim 328, wherein the digital eyewear is disposed to remove a selected distraction from the user’s ambient environment, without having to determine in which direction or at what focal length the user is looking.

331. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein one or more of: the sensory input or the continuous image, are responsive to the dynamic eye tracking mechanism.

332. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the dynamic eye tracking mechanism is disposed to determine one or more of: a direction of, or a distance to, the object.

333. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the dynamic eye tracking mechanism and disposed to control the dynamic eye tracking mechanism in response to an input from the user.

334. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to provide the sequence of discontinuous images in response to a movement of the object.

335. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to provide the sequence of discontinuous images in response to an input from the user.

336. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to select a frequency for the sequence of discontinuous images in response to a movement of the object.

337. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to select a frequency for the sequence of discontinuous images in response to an input from the user.

338. Digital eyewear as in claim 337, wherein the computing device is disposed to select an object and to select a frequency of operation at which the user’s audio or visual acuity with respect to that particular object is maximized.

339. Digital eyewear as in claim 337, wherein the object includes an object at a sports event; the digital eyewear is disposed to improve the user’s audio or visual acuity with respect to that object’s speed and direction of travel with respect to the user.

340. Digital eyewear as in claim 298, wherein the virtual continuous image improves the user’s audio or visual acuity with respect to the object without interfering with the user’s normal sensory activity.

341. Digital eyewear as in claim 298, wherein the virtual continuous image provides a relatively greater amount of audio or visual acuity with respect to the object when sensed by the user.

342. Digital eyewear as in claim 341, wherein the user’s sensory image of the object includes an audio or visual sense.

343. Digital eyewear as in claim 298, wherein the sensory input includes a continuous audio input from the external source; the periodic or random sequence of discontinuous images includes a sequence of audio signals; the sequence of audio signals is tuned with respect to a frequency of the continuous audio input; whereby the virtual continuous image is disposed to allow the user to hear a selected portion of the continuous audio input.

344. Digital eyewear as in claim 343, wherein the sensory input includes a continuous audio input from the external source, the continuous audio input including a signal outside a range of human hearing; the virtual continuous image is disposed to allow the user to hear, within the range of human hearing, a portion of the continuous audio input otherwise outside the range of human hearing.

345. Digital eyewear as in claim 343, wherein the sensory input includes a continuous audio input from an engine or another operable device; the virtual continuous image is disposed to allow the user to hear whether the operable device is emitting any unexpected sounds.

346. Digital eyewear as in claim 298, wherein the sensory input is disposed to be received without tracking a direction or focal length associated with the user’s vision.

347. Digital eyewear as in claim 346, wherein the digital eyewear is disposed to operate with respect to the user’s entire field of view; the user’s audio or visual acuity is improved with respect to an ambient environment.

348. Digital eyewear as in claim 346, wherein the digital eyewear is disposed to remove a selected distraction from the user’s ambient environment, without having to determine in which direction or at what focal length the user is looking.

349. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein one or more of: the sensory input or the continuous image, are responsive to the dynamic eye tracking mechanism.

350. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the dynamic eye tracking mechanism is disposed to determine one or more of: a direction of, or a distance to, the object.

351. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the dynamic eye tracking mechanism and disposed to control the dynamic eye tracking mechanism in response to an input from the user.

352. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to provide the sequence of discontinuous images in response to a movement of the object.

353. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to provide the sequence of discontinuous images in response to an input from the user.

354. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to select a frequency for the sequence of discontinuous images in response to a movement of the object.

355. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the sensory input or the continuous image, and disposed to select a frequency for the sequence of discontinuous images in response to an input from the user.

356. Digital eyewear as in claim 355, wherein the computing device is disposed to select an object and to select a frequency of operation at which the user’s audio or visual acuity with respect to that particular object is maximized.

357. Digital eyewear as in claim 298, wherein the sensory input includes a continuous visual image from the external source; the periodic or random sequence of discontinuous images is tuned with respect to a frequency of the relatively rapid motion of an object in the augmented reality or virtual reality experience; the virtual continuous image is disposed to allow the user to see the object in a substantially stationary position in the user’s field of view while the object is moving relative to the user.

358. Digital eyewear as in claim 357, wherein the relatively rapid motion includes a substantially rotational motion; whereby the virtual continuous image is disposed to allow the user to see the object in a substantially stationary position in the user’s field of view while the object is rotating in the augmented reality or virtual reality experience.

359. Digital eyewear as in claim 357, wherein the virtual continuous image is disposed to allow the user to examine the object for one or more of: cracks, dents, other damage, loose attachment, maladjustment, or misalignment, while the object is moving in the augmented reality or virtual reality experience.

360. Digital eyewear as in claim 359, wherein the object includes a wheel; the virtual continuous image is disposed to allow the user to determine whether the object is properly centered, while the object is rotating in the augmented reality or virtual reality experience.

361. Digital eyewear as in claim 359, wherein the object is disposed on a lathe or another rotating base in the augmented reality or virtual reality experience; the virtual continuous image is disposed to allow the user to determine whether the object has any damage, while the rotating base is rotating.

362. Digital eyewear as in claim 359, wherein the object includes a turbine blade or another machine part in the augmented reality or virtual reality experience; the virtual continuous image is disposed to allow the user to examine the machine part while the machine is in operation.

363. Digital eyewear as in claim 359, wherein the virtual continuous image is disposed to allow the user to determine whether the turbine blade is properly aligned with respect to another turbine blade while the machine is in operation in the augmented reality or virtual reality experience.

364. Digital eyewear as in claim 298, wherein the sensory input includes a continuous audio input from the augmented reality or virtual reality experience; the periodic or random sequence of discontinuous images includes a sequence of audio signals in the augmented reality or virtual reality experience; the sequence of audio signals is tuned with respect to a frequency of the continuous audio input; whereby the virtual continuous image is disposed to allow the user to hear a selected portion of the continuous audio input.

365. Digital eyewear as in claim 364, wherein the virtual continuous image is disposed to allow the user to hear a periodically repeated portion of the continuous audio input from the augmented reality or virtual reality experience.

366. Digital eyewear as in claim 364, wherein the sensory input includes a continuous audio input from the external source, the continuous audio input including a signal from the augmented reality or virtual reality experience that is outside a range of human hearing; the virtual continuous image is disposed to allow the user to hear, within the range of human hearing, a portion of the continuous audio input otherwise outside the range of human hearing.

367. Digital eyewear as in claim 364, wherein the sensory input includes a continuous audio input from an engine or another operable device in the augmented reality or virtual reality experience; the virtual continuous image is disposed to allow the user to hear whether the operable device is emitting any unexpected sounds in the augmented reality or virtual reality experience.

368. Digital eyewear as in claim 298, wherein the sensory input is disposed to be received from the augmented reality or virtual reality experience without tracking a direction or focal length associated with the user’s vision.

369. Digital eyewear as in claim 368, wherein the digital eyewear is disposed to operate with respect to the user’s entire field of view in the augmented reality or virtual reality experience; the user’s audio or visual acuity is improved with respect to an ambient environment portion of the augmented reality or virtual reality experience.

370. Digital eyewear as in claim 368, wherein the digital eyewear is disposed to remove a selected distraction from the augmented reality or virtual reality experience, without having to determine in which direction or at what focal length the user is looking.

371. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision in the augmented reality or virtual reality experience; wherein one or more of: the sensory input or the continuous image, are responsive to the dynamic eye tracking mechanism.

372. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision in the augmented reality or virtual reality experience; wherein the dynamic eye tracking mechanism is disposed to determine one or more of: a direction of, or a distance to, the object.

373. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision in the augmented reality or virtual reality experience; wherein the computing device is couplable to the dynamic eye tracking mechanism and disposed to control the dynamic eye tracking mechanism in response to an input from the user.

374. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision in the augmented reality or virtual reality experience; wherein the computing device is couplable to the augmented reality or virtual reality experience, and disposed to provide the sequence of discontinuous images in response to a movement of the object in the augmented reality or virtual reality experience.

375. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision; wherein the computing device is couplable to the augmented reality or virtual reality experience, and disposed to provide the sequence of discontinuous images in response to an input from the user.

376. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision in the augmented reality or virtual reality experience; wherein the computing device is couplable to the augmented reality or virtual reality experience, and disposed to select a frequency for the sequence of discontinuous images in response to a movement of the object in the continuous image.

377. Digital eyewear as in claim 298, including a dynamic eye tracking mechanism disposed to determine one or more of: a direction or focal length associated with the user’s vision in the augmented reality or virtual reality experience; wherein the computing device is couplable to the augmented reality or virtual reality experience, and disposed to select a frequency for the sequence of discontinuous images in response to an input from the user.

378. Digital eyewear as in claim 377, wherein the computing device is disposed to select an object and to select a frequency of operation at which the user’s audio or visual acuity with respect to that particular object is maximized in the augmented reality or virtual reality experience.

379. Digital eyewear as in claim 155, wherein the computing device is disposed to select the sequence of discontinuous images at a frequency of more than 25 Hz.

380. Digital eyewear as in claim 155, wherein the computing device is disposed to select the sequence of discontinuous images at a frequency of between 80 Hz and 100 Hz.

381. Digital eyewear including a sensory input disposed to receive a continuous image from an external source; a computing device coupled to the sensory input, the computing device disposed to adjust receipt of the continuous image; wherein the computing device is disposed to periodically or randomly provide a sequence of discontinuous images, the sequence of discontinuous images being integrable to provide a virtual continuous image when viewed; the computing device is disposed to present at least a first portion of the sequence of discontinuous images to only a first eye, and is disposed to present at least a second portion of the sequence of discontinuous images to only a second eye; wherein the virtual continuous image improves the user’s ability to sense the external source without degrading the user’s audio or visual acuity with respect to the object.

382. Digital eyewear as in claim 381, wherein the computing device is disposed to present a sequence of either still images or moving images.

383. Digital eyewear as in claim 381, wherein the computing device is disposed to present a sequence of images in response to a background of a selected object in a user’s field of view.

384. Digital eyewear as in claim 381, wherein the computing device is disposed to present a sequence of images in response to a velocity of a selected object relative to a user.

385. Digital eyewear as in claim 381, wherein the computing device is disposed to present a sequence of images using an amount of shading/inverse-shading in response to a selected object in a user’s field of view.

386. Digital eyewear as in claim 381, wherein the computing device is disposed to select first images in the sequence of discontinuous images for presentation to the first eye; the computing device is disposed to select second images in the sequence of discontinuous images for presentation to the second eye.

387. Digital eyewear as in claim 386, wherein the computing device is disposed to alternate selecting images in the sequence of discontinuous images for presentation to the first eye and the second eye.

388. Digital eyewear as in claim 386, wherein the computing device is disposed to present images to only the first eye using color alteration.

389. Digital eyewear as in claim 386, wherein the computing device is disposed to present images to only the first eye using shading/ inverse-shading.

390. Digital eyewear as in claim 389, wherein the computing device is disposed to block light to the second eye, whereby images are presented to only the first eye.

391. Digital eyewear as in claim 381, wherein the continuous image represents one or more of: an object that is in relatively rapid motion with respect to a user, or that is otherwise difficult to see when the user is looking directly at the object; an object that is primarily viewable by the user using their peripheral vision, or using another portion of the user’s vision that has a lesser degree of natural acuity; or an object that involves the user’s rapid reaction thereto.

392. A system including digital eyewear disposed to receive an external sensory input in response to an external device; the external device disposed to perform a physical action that is also likely to overload the sensory input with one or more of an excess brightness or loudness; wherein a computing device is disposed to determine a sensory input in response to the physical action and to adjust the wearer’s receipt of the sensory input to remove a portion of the sensory input due to the physical action.

393. A system as in claim 155, wherein the sensory input includes eyewear, or an optical system, disposing a lens between the external source and a user’s eye; wherein the lens is disposed in one or more of: glasses or sunglasses, contact lenses, goggles, facemasks or helmets, or other eyewear.

394. A system as in claim 169, wherein the sensory input includes eyewear, or an optical system, disposing a lens between the external source and a user’s eye; wherein the lens is disposed in one or more of: glasses or sunglasses, contact lenses, goggles, facemasks or helmets, or other eyewear.

395. A system as in claim 176, wherein the sensory input includes eyewear, or an optical system, disposing a lens between the external source and a user’s eye; wherein the lens is disposed in one or more of: glasses or sunglasses, contact lenses, goggles, facemasks or helmets, or other eyewear.

396. A system as in claim 183, wherein the sensory input includes eyewear, or an optical system, disposing a lens between the external source and a user’s eye; wherein the lens is disposed in one or more of: glasses or sunglasses, contact lenses, goggles, facemasks or helmets, or other eyewear.

397. A system as in claim 194, wherein the external source is responsive to one or more of: an explosive, a flashbang grenade, a floodlight, a law enforcement operation, a loudspeaker, or another source of excessive audio/video complexity.

398. A system as in claim 190, wherein the external source is disposed to perform a physical action that is also likely to overload the sensory input with one or more of an excess brightness or loudness; the computing device is disposed to determine a sensory input in response to the physical action and to adjust the wearer’s receipt of the sensory input to remove a portion of the sensory input due to the physical action.

399. A system as in claim 398, wherein the physical action includes an explosive activity.

400. A system as in claim 398, wherein the physical action includes an excessive amount of brightness or loudness.

401. A system as in claim 190, wherein the warning signal is disposed to indicate a time when the wearer’s receipt of the sensory input is projected to occur.

402. A system as in claim 401, wherein the warning signal is encrypted or obfuscated to prevent an unintended receiver of the warning signal from interpreting it.

Description:
Dynamic visual optimization

Incorporated Disclosures

[1] Priority Claim. This Application describes technologies that can be used with inventions, and other technologies, described in one or more of the following documents. This Application claims priority, to the fullest extent permitted by law, of these documents.

[2] This Application is a continuation-in-part of

— Application 16/264,553, filed Jan. 31, 2019, naming inventor Scott LEWIS, titled “Digital eyewear integrated with medical and other services”, Attorney Docket No. 6401, currently pending; which is a continuation-in-part of

— Application 16/138,941, filed Sept. 21, 2018, naming the same inventor, titled “Digital eyewear procedures related to dry eyes”, Attorney Docket No. 6301, currently pending; which is a continuation-in-part of

— Application 15/942,951, filed Apr. 2, 2018, naming the same inventor, titled “Digital Eyewear System and Method for the Treatment and Prevention of Migraines and Photophobia”, Attorney Docket No. 6201, currently pending; which is a continuation-in-part of

— Application 15/460,197, filed March 15, 2017, naming the same inventor, titled “Digital Eyewear Augmenting Wearer’s Interaction with their Environment”, unpublished, Attorney Docket No. 6101, currently pending.

[3] This Application is a continuation-in-part of

— Application 16/684,479, filed Nov. 14, 2019, naming inventor Scott LEWIS, titled “Digital visual optimization”, Attorney Docket No. 6501, currently pending; which is a continuation-in-part of

— Application 16/264,553, filed Jan. 31, 2019, naming inventor Scott LEWIS, titled “Digital eyewear integrated with medical and other services”, Attorney Docket No. 6401, currently pending; which is a continuation-in-part of

— Application 16/138,941, filed Sept. 21, 2018, naming the same inventor, titled “Digital eyewear procedures related to dry eyes”, Attorney Docket No. 6301, currently pending; which is a continuation-in-part of

— Application 15/942,951, filed Apr. 2, 2018, naming the same inventor, titled “Digital Eyewear System and Method for the Treatment and Prevention of Migraines and Photophobia”, Attorney Docket No. 6201, currently pending; which is a continuation-in-part of

— Application 15/460,197, filed March 15, 2017, naming the same inventor, titled “Digital Eyewear Augmenting Wearer’s Interaction with their Environment”, unpublished, Attorney Docket No. 6101, currently pending.

[4] Application 15/460,197, filed March 15, 2017, is a continuation-in-part of

— Application 13/841,550, filed Mar. 15, 2013, naming the same inventor, titled “Enhanced Optical and Perceptual Digital Eyewear”, Attorney Docket No. 5087 P, currently pending; and is also a continuation-in-part of

— Application 14/660,565, filed Mar. 17, 2015, naming the same inventor, and having the same title, Attorney Docket No. 5266 C3, currently pending.

[5] Application 14/660,565, filed Mar. 17, 2015, is a continuation of

— Application 14/589,817, filed Jan. 5, 2015, naming the same inventor, and having the same title, Attorney Docket No. 5266 C1C2, currently pending; which is a continuation of

— Application 14/288,189, filed May 27, 2014, naming the same inventor, and having the same title, Attorney Docket No. 5266 C1C, currently pending; which is a continuation of

— Application 13/965,050, filed Aug. 12, 2013, naming the same inventor, and having the same title, Attorney Docket No. 5266 C1, currently pending; which is a continuation of

— Application 13/841,141, filed Mar. 15, 2013, naming the same inventor, and having the same title, Attorney Docket No. 5266P, now issued as US 8,696,113 on Apr. 15, 2014.

[6] Each of these documents is hereby incorporated by reference as if fully set forth herein. Techniques described in this Application can be elaborated with detail found therein. These documents are sometimes referred to herein as the “Incorporated Disclosures,” or variants thereof.

Copyright Notice

[7] A portion of the disclosure of this patent document contains material subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.

Background

[8] Field of the Disclosure. This Application generally describes techniques relating to digital visual optimization, and other issues.

[9] Related Art. Human eyesight and other senses are best suited to environments in which there is relatively little sensory noise, and in which human cognitive systems can relatively easily process sensory information. One problem arises when sensory information overloads human cognitive systems. Human cognitive systems can be overloaded when sensory information exceeds sensory processing limits, such as when sensing overly bright light or overly loud sound. Human cognitive systems can also be overloaded when sensory information exceeds cognitive limits, such as when sensing overly complex video or audio. When human cognitive systems are overloaded, this can degrade human visual acuity.

[10] Each of these issues, as well as other possible considerations, might cause difficulty in aspects of addressing the problems relating to sensory inputs, for one or more senses, that can exceed sensory processing limits or cognitive processing limits, or that otherwise degrade visual acuity.

Summary of the Disclosure

[11] This summary of the disclosure is provided as a convenience to the reader. It is not intended to limit or restrict the scope of the disclosure or of the invention. This summary is intended as an introduction to more detailed description found in this Application, and as an overview of techniques explained in this Application. The described techniques have applicability in other fields and beyond the embodiments specifically reviewed in detail.

[12] This Application describes devices, and methods for using them, capable of optimizing sensory inputs so as to allow observation of those sensory inputs, while ameliorating limits generally imposed by sensory processing limits or cognitive limits, and while providing improved static, dynamic, or peripheral visual acuity. Devices can include digital eyewear that detects problematic sensory inputs and adjusts one or more of: (A) the sensory inputs themselves, (B) the user’s receipt of those sensory inputs, or (C) the user’s sensory or cognitive reaction to those sensory inputs.

[13] More specifically, and without limitation, this Application describes devices, and methods for using them, capable of improving real-time sensory inputs and cognitive processing thereof, and providing improved static, dynamic, or peripheral visual acuity, for users to better perform tasks (such as possibly in real time). This is distinguished from slow-rate strobe techniques that prompt the user to perform distinctly different cognitive functions, such as techniques designed to prompt the user to memorize a task or to perform a task without any sensory input at all. Thus, this Application is primarily directed to users while actually performing an activity (such as possibly in real time), possibly under adverse conditions, not while training ahead of time for that activity.

[14] For example, this Application describes devices, and methods for using them, capable of providing the user with the ability to receive sensory inputs and process them cognitively (A) without motion blur; (B) without blur due to differences between relatively focused and unfocused fields of view, such as peripheral vision or other non-frontal vision; (C) while focusing on particular objects without distraction from backgrounds or from irrelevant objects; (D) while obtaining more information about objects in the user’s field of view, such as by viewing those objects as substantially still images despite relative motion between the user and object; (E) while performing other activities involving relative motion between a user and an object; and otherwise.

[15] In particular embodiments, this Application describes devices, and methods for using them, capable of providing the user with the ability to receive sensory inputs and process them cognitively (A) while participating in or viewing sports, such as baseball, basketball, football, golf, racing, shooting (such as shooting skeet), skateboarding, skiing, soccer, tennis and table tennis, video games (including first-person shooters and otherwise), and variants thereof; (B) while conducting or reviewing law enforcement or military operations, such as decisions whether to shoot, piloting, use of suppression devices, and otherwise; (C) while conducting or reviewing search/rescue operations, emergency responder operations, or other observational operations, such as decisions whether to look more closely, look for more detail, or otherwise identify subjects, and otherwise; (D) while experiencing medical conditions, such as autism and autism spectrum disorders, ADHD, PTSD and other psychological triggers of trauma, migraines, photophobia, neuro-ophthalmic disorders, and variants thereof; and otherwise.

[16] This Application also describes devices, and methods for using them, capable of providing the user with the ability to receive sensory inputs and process them cognitively in one or more of the following circumstances:

— (A) while using eyewear and associated optical systems, such as glasses or sunglasses, contact lenses, goggles, facemasks or helmets, or other eyewear;

— (B) while using optical systems, such as automobile mirrors and windshields, binoculars, cameras, computer screens (including laptops, tablets, smartphones, and otherwise), microscopes and telescopes, or rifle scopes or other gun scopes;

— (C) while coaching, performing as an agent, or scouting for talent, or otherwise, with respect to sports or players;

— (D) while coaching, observing, training, or otherwise, with respect to activities where the presence of a secondary party could interfere; such as aircraft piloting, bomb defusal, combat training, contact sports, horseback riding, marksmanship, military training, or otherwise;

— (E) while participating in entertainment, such as interactive entertainment, comedy or horror shows, dinner theater, live action role playing, or otherwise;

— (F) while observing entertainment, such as being a spectator for a careful or high-speed sport, such as marksmanship, automobile racing or skiing, or otherwise;

— or otherwise as further described herein.

[17] This Application also describes devices, and methods for using them, capable of providing the user with the ability to receive sensory inputs and process them cognitively while participating in activities in which visual acuity is valuable to the viewer, such as:

— (A) operating a flying vehicle, such as an aircraft, an ultralight aircraft, a glider, a hang-glider, a helicopter, or a similar vehicle;

— (B) operating a ground vehicle, such as an automobile, a race car, or a similar vehicle;

— (C) operating a water vehicle, such as a kayak, motorboat, sailboat or yacht, speedboat, a cigarette boat, or a similar vehicle;

— (D) operating a motorcycle, a dirt bike, a bicycle, a unicycle, or a similar vehicle;

— (E) participating in a sport using relatively rapid sports equipment, such as baseball, basketball, an equestrian sport (such as dressage or horse racing), football, field hockey, ice hockey, jai alai, lacrosse, a snow sport (such as skiing, sledding, snowboarding, operating a snowmobile, or tobogganing or luge), soccer, or a similar sport;

— (F) participating in an activity in which shooting might occur, such as hunting, “laser tag”, skeet shooting (or otherwise shooting at a moving target), target shooting, or a similar activity;

— (G) participating in an activity in which using a sight (whether stereoscopic or not) might occur, such as using binoculars, using a rifle sight, or photography (whether still photography or motion-picture photography);

— (H) participating in an activity that involves tracking moving equipment, such as viewing rotating turbines or wheels, or for which it is useful to tune a viewing frequency to a frequency of angular position or movement, so as to operate in synchrony therewith;

— (I) participating in an activity in which critical, such as life-critical, decisions are made, such as performing as an emergency responder, emergency room personnel, a law enforcement officer, military personnel, or a similar activity;

— or otherwise as further described herein.

[18] After reading this Application, those skilled in the art would see that these and other advantages with respect to users’ possibly-adverse reactions to sensory inputs can be provided by systems and methods described herein.

Brief Description of the Figures

[19] In the figures, like references generally indicate similar elements, although this is not strictly required.

[20] Fig. 1 shows a conceptual drawing of an example digital eyewear system.

[21] Fig. 2 (collectively including fig. 2A-2B) shows a conceptual drawing of example sensory inputs including possible sensory overload or perceptual noise.

[22] Fig. 3 (collectively including fig. 3A-3B) shows a conceptual drawing of example adjustments to sensory inputs.

[23] Fig. 4 (collectively including fig. 4A-4C) shows a conceptual drawing of example sensory inputs including possible cognitive overload.

[24] Fig. 5 shows a conceptual drawing of example adjustment of user sensory systems.

[25] Fig. 6 (collectively including fig. 6A-6E) shows a conceptual drawing of an example method of using a digital eyewear system.

[26] Fig. 7 shows a conceptual drawing of some example additional applications and embodiments.

[27] Fig. 8 (collectively including fig. 8A-8D) shows a conceptual drawing of an example method of using a digital eyewear system.

[28] After reading this Application, those skilled in the art would recognize that the figures are not necessarily drawn to scale for construction, nor do they necessarily specify any particular location or order of construction.

Detailed Description

GENERAL DISCUSSION

[29] As further described herein, devices and methods for using them are capable of optimizing sensory inputs, both for improved sensory ability and for improved cognitive ability to process those sensory inputs.

[30] Human sensory inputs, such as viewable scenes, can include elements that substantially overload human sensory capabilities, such as when they are excessively or suddenly bright. This can occur due to glare, due to sudden changes in brightness or loudness, due to sudden changes in a person’s relationship to a source of brightness or loudness (such as when entering or exiting an enclosed space), due to audio/video inputs where human sensory inputs lack high precision (such as distant or peripheral video inputs, or such as audio/video inputs near the limits of human sensory detection), or due to other factors, or otherwise.

[31] Human sensory inputs can also include elements that substantially overload human cognitive capabilities, such as when they are excessively or suddenly noisy, which can interfere with recognition of objects or their position in the user’s field of view, when there is substantial motion between the user and an object, which can cause motion blur and also interfere with recognition of objects or their position in the user’s field of view, or when the object is presented to the user in a portion of the user’s field of view that has lesser natural visual acuity, such as a peripheral vision portion or other non-frontal visual portion of the user’s field of view. This can occur due to excessively complex or otherwise unusual audio/video sensory patterns, such as due to fast-moving or otherwise quickly changing objects, objects being sensed against a noisy or otherwise disruptive background, due to inability of human cognitive abilities to “keep up” with moving or spinning objects, due to presentation in a portion of the human field of view with a lesser natural visual acuity, due to other factors, or otherwise.

[32] For example, when the user moves (by linear motion, by changing position, or by changing angle of view), objects in the user’s field of view can be subject to motion blur. Similarly, when the user is moving, particularly at speed, objects in areas of the user’s unfocused field of view (such as in the user’s peripheral vision or other regions upon which the user’s view is not directly focused) can be subject to motion blur or other blur. Similarly, when the user is moving, particularly radially, objects in portions of the user’s field of view with substantially greater visual acuity can appear in portions of the user’s field of view with substantially lesser visual acuity, the latter possibly including portions relating to peripheral vision or other non-frontal visual areas. For another example, when the user is focusing on a particular object, the user’s ability to distinguish that object, or properties of its movement, can be limited by distraction from backgrounds or from irrelevant objects. This can be of particular concern when the user needs to react quickly, or when the user needs to evaluate that object, or properties of its movement.

[33] For another example, when the user is participating in entertainment, such as interactive entertainment, it can be useful to focus the user’s attention on particular aspects of the presentation thereof, such as drawing the user’s attention toward (or away from) selected actors, props, or scenery. In such cases, such as comedy or horror shows, it can aid the presentation to prompt the user to notice otherwise subtle aspects of the presentation, or to be surprised by otherwise clear aspects of the presentation, to enhance the entertainment value thereof.

Enhancing cognitive processing

[34] Enhancement of human cognitive processing of sensory inputs can be of particular value when the user is participating in or viewing sports. Many such sports include rapid movement of balls or other objects that a participant or viewer wishes to accurately view or to rapidly react. Accurate and rapid identification of those objects and their trajectories can make the difference between scoring and failing to do so. Similarly, enhancement of human cognitive processing of sensory inputs can be of particular value when the user is coaching players or performing as an agent or talent scout. Accurate identification of the players’ movement, speed, and other characteristics of their actions can be important in identifying prospective athletes. Similarly, enhancement of human cognitive processing of sensory inputs can be of particular value when the user is observing sports or other competitive activities. Accurate identification of the players’ (or objects’) movement, speed, and other characteristics of their actions can be important in determining what is occurring in the event.

[35] For example, when the user is observing or participating in a sport in which speed is relevant, it can occur that objects the user sees are blurred due to their speed. In automobile racing or skiing, or in sports where a small object travels at high speed, such as baseball or golf, or at aircraft demonstration shows, the objects the user sees can be blurred when the object moves quickly past the user at close range, while the objects the user sees can be difficult to identify when the object moves past the user at a distance. In such cases, a system can determine when motion blur presents a problem for observers, and can adjust the user’s sensory inputs so as to ameliorate the effect of motion blur and enhance visual acuity; similarly, the system can determine when distance of a small object presents a problem for observers, and can adjust the user’s sensory inputs so as to ameliorate the effect of distance of small objects and so as to enhance the user’s visual acuity.

[36] For another example, when the user is observing or participating in a sport in which precision is relevant, it can occur that objects the observer sees are difficult to identify with sufficient precision to obtain full value from observing. In sports where careful aim is relevant, such as marksmanship, baseball (such as pitching at a strike zone), or golf (such as putting), objects the observer sees can be at sufficient distance, or be obscured, so as to prevent the user from seeing those objects sufficiently well to determine whether players are aiming properly. In such cases, a system can determine when precision presents a problem for observers, and can adjust the observer’s sensory inputs so as to ameliorate the effect of distance on observing precision and so as to enhance the user’s visual acuity.

[37] In sports where objects can be directed from a side or other non-frontal direction with respect to the user, it can occur that objects the user sees are difficult to identify with sufficient precision, or otherwise difficult to see and identify in sufficient time, due to the user’s naturally lesser visual acuity from a peripheral or other non-frontal direction. In such cases, a system can determine when objects enter the user’s field of view from a peripheral or other non-frontal direction; the system can adjust the user’s sensory inputs so as to ameliorate the effect of observing the object from a peripheral or other non-frontal direction.
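A minimal sketch of one way the system might classify an object as entering from a peripheral or other non-frontal direction follows, using the angle between the tracked gaze direction and the direction toward the object; the 30-degree threshold is an illustrative assumption, as this Application does not fix a specific angle.

```python
import math

# Hypothetical threshold: treat anything more than ~30 degrees off the
# gaze axis as peripheral; this Application does not specify an angle.
PERIPHERAL_ANGLE_DEG = 30.0

def is_peripheral(gaze_dir, object_dir):
    """Return True when the object lies outside the user's frontal view.

    gaze_dir and object_dir are unit vectors (x, y, z): the tracked gaze
    direction and the direction from the user toward the object.
    """
    dot = sum(g * o for g, o in zip(gaze_dir, object_dir))
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle_deg > PERIPHERAL_ANGLE_DEG
```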

[38] For another example, when the user is observing an event or a scene while using an optical system, such as binoculars, a camera, a computer screen, a microscope or telescope, a rifle scope or other gun scope, or otherwise, it can occur that objects the user sees are blurred due to their speed, difficult to see due to their size, or obscured due to being embedded in a similar environment or being hidden at least partially behind another object, or lacking visual acuity because they are disposed in a peripheral or otherwise non-frontal portion of the user’s field of view. In such cases, a system can determine an effect from the optical system, and can adjust the user’s sensory inputs so as to ameliorate effects from the optical system on the user’s field of view, and so as to enhance the user’s visual acuity.

[39] Enhancement of human cognitive processing of sensory inputs and visual acuity can also be of particular value when the user is participating in or viewing law enforcement or military operations. These operations can require accurate and rapid decisions with respect to threat assessment and responses thereto. Similar to other activities, cognitive noise from distracting backgrounds or irrelevant objects, motion blur, and other limits on human cognitive capabilities or visual acuity, can make the difference between success or failure, and can influence whether there are significant injuries or even loss of life.

Correcting cognitive processing and improving visual acuity

[40] Correction of human cognitive processing of sensory inputs, and improving human visual acuity, can be of particular value when the user is subject to a medical condition that affects the user’s capability for cognitive processing of sensory inputs, or otherwise affects the user’s visual acuity. For example, when the user’s cognitive processing is overloaded by sensory inputs, removing some of those sensory inputs can allow the user to more accurately process the remainder. For another example, when the user’s cognitive processing is disturbed (“triggered”) by particular sensory inputs, removing those particular sensory inputs can allow the user to more easily process the remainder. For another example, when the user’s visual acuity is naturally less than its greatest, such as when the user attempts to view objects from a peripheral or otherwise non-frontal portion of their field of view, altering at least some of those particular sensory inputs can allow the user to more easily process them.

[41] Correcting human cognitive processing of sensory inputs can also be of particular value when the user is subject to one or more cognitive disabilities. Accurate and rapid correction of cognitive disabilities can make the difference between a patient’s ability to operate effectively or otherwise.

— Cognitive disabilities can include psychological disorders such as: depression, bipolar disorder (sometimes known as “manic depression”), post-traumatic stress disorder (sometimes known as PTSD or ePTSD), schizophrenia, other psychological disorders or possibly personality disorders, or otherwise, each of which can affect human cognitive processing.

— Cognitive disabilities can also include congenital or related cognitive disorders, such as: ADD or ADHD, autism or autism spectrum disorder, epilepsy, migraines/photophobia or other types of excessively severe headaches, or otherwise, each of which can affect human cognitive processing.

— Cognitive disabilities can also include disorders due to, or related to, brain or head injuries, chemical imbalances (whether self-induced due to self-medication with alcohol or recreational drugs, or otherwise), and related issues, such as: concussion or traumatic brain injury (TBI), hallucination or delusions, “jet lag”, other disturbances of the circadian rhythm such as extended “night shift” or “all-nighter” work, sleep deprivation, or otherwise, each of which can affect human cognitive processing.

— Cognitive disabilities can also include disorders due to aging or disease, such as Alzheimer’s or Parkinson’s, each of which can affect human cognitive processing.

[42] For example, patients who are subject to PTSD or ePTSD (complex or continual PTSD) can react excessively negatively or severely to audio/video inputs that are “triggers” or are related to events that are sources of PTSD symptoms. Examples include PTSD symptoms from combat, war zones, or witnessing criminal activity; loud noises, automobile backfires, or fireworks can be mistaken for, or can trigger reactions as if they were, gunshots or explosions. In such cases, audio sensory inputs can be detected and determined to be likely to trigger the patient’s PTSD, and can be adjusted (such as by treating input audio to remove sudden loud noises) to reduce the probability of the patient suffering a PTSD reaction.

[43] For another example, patients who are subject to depression (or depressive aspects of bipolar disorder) can react positively to exercise, fresh air, human interaction, sunlight, or water. In such cases, when a system determines that the patient is subject to depression or depressive aspects of any other psychological disorder, the system can adjust the patient’s environment to make the patient more likely to react less depressively, or less likely to react more depressively. The system can prompt the patient’s eye toward brighter, sunnier, or more positive scenes; can adjust sensory inputs to include more blue or green frequencies of light; can adjust sensory inputs to include more positive audio inputs (such as music); can prompt the patient’s eyes to blink or otherwise apply water thereto; or otherwise.

[44] For another example, patients who are subject to epilepsy can react excessively negatively or severely to flashing lights or sudden changes in light levels. In such cases, when a system determines that the patient is subject to epilepsy, or might suffer from adverse audio/video inputs, the system can adjust the patient’s environment to make the patient less likely to suffer from a seizure, or to make the patient more likely to recover from a seizure. The system can prompt the patient’s eye toward less adverse stimuli, can filter audio/video inputs to remove triggering inputs, or otherwise.

[45] For another example, patients who are subject to “jet lag”, other disturbances of the circadian rhythm such as extended “night shift” or “all-nighter” work, sleep deprivation, or related issues, can react to stimuli in a sluggish manner, or fail to notice those stimuli entirely. In such cases, when a system determines that the patient is subject to disturbances of the circadian rhythm, the system can adjust the patient’s environment to make the patient’s sensory inputs more clear, more stimulating, or otherwise. The system can prompt the patient’s eye toward the relevant stimuli, can increase the intensity or speed of those stimuli, can decrease the intensity of light frequencies toward the blue end of the visible spectrum, or otherwise.

Detecting sensory inputs

[46] Digital eyewear can detect problematic sensory inputs in response to possible sensory processing limits. For example, sensory processing limits might occur in response to excessive video luminosity or audio loudness. Excessive video luminosity might occur in response to overly bright background lighting, floodlights, glare, sudden brightness changes, or other sources; excessive audio loudness might occur in response to overly loud machinery such as aircraft engines or artillery, sound amplifiers, sudden loudness changes, or other sources. Digital eyewear can also detect problematic sensory inputs in response to possible cognitive limits. For example, cognitive limits might occur in response to excessive audio/video complexity or noise. Excessive audio/video complexity or noise might occur in response to rapidly changing images or sounds, images or sounds with multiple components, images or sounds with substantial random components, or other sources.
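As one illustrative possibility, the following sketch flags problematic sensory inputs by thresholding frame luminance, sudden luminance changes, and loudness, in the spirit of the detection just described; every numeric threshold here is an assumption for illustration, not a limit prescribed by this Application.

```python
# Hypothetical limits; the Application describes detecting excessive
# luminosity, loudness, and sudden changes, but prescribes no numbers.
MAX_MEAN_LUMINANCE = 0.9    # normalized 0..1 mean frame luminance
MAX_LUMINANCE_JUMP = 0.4    # sudden change between successive frames
MAX_LOUDNESS_DB = 100.0     # sound pressure level, in dB

def detect_problematic_input(prev_luminance, luminance, loudness_db):
    """Return a list of detected sensory-processing problems."""
    problems = []
    if luminance > MAX_MEAN_LUMINANCE:
        problems.append("excessive brightness")
    if luminance - prev_luminance > MAX_LUMINANCE_JUMP:
        problems.append("sudden brightness change")
    if loudness_db > MAX_LOUDNESS_DB:
        problems.append("excessive loudness")
    return problems
```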

[47] As further described herein, devices and methods for using them are capable of receiving sensory inputs, processing those sensory inputs to determine whether they pose problems for human sensory limits, human cognitive limits, or some combination thereof, or otherwise. When detecting problematic sensory inputs, digital eyewear can adjust sensory inputs, adjust user receipt of those sensory inputs, adjust user reaction to those sensory inputs, or otherwise ameliorate effects of those sensory inputs.

[48] Digital eyewear can alternatively detect sensory inputs toward which (or away from which) the user’s attention should be drawn. For example, selected objects can identify themselves by radio or other signals, or can be recognized by digital eyewear using a machine learning or artificial intelligence technique. Examples of such objects include fast-moving objects; objects in selected user fields of view; sports equipment and players; law enforcement or military targets, aircraft or other vehicles, or suppression devices; medical or psychological triggers; selected entertainment actors, props, or scenery; or otherwise.

Adjusting sensory inputs

[49] Digital eyewear can adjust the sensory inputs in response to a source direction of the sensory inputs or an overlay of sensory inputs. For example, a user using their peripheral vision (or another non-frontal portion of their field of view) might find it difficult to distinguish between similar objects. For another example, a user viewing an object that moves in front of the sun might experience sensory overload from solar brightness. Digital eyewear can also adjust the sensory inputs in response to a relative velocity between the user and viewable objects. For example, moving objects might tend to blur when they are too fast for the user’s cognitive systems to process. Digital eyewear can also adjust the sensory inputs in response to a signal from an audio or visual source indicating that it is likely to overload the user’s sensory inputs. For example, sensory input overload might occur in response to a moving floodlight, which might warn the digital eyewear.

[50] As further described herein, devices and methods for using them can adjust sensory inputs deemed problematic. For example, excessively or suddenly bright or loud audio/video inputs can be filtered to remove elements that would cause human sensory overload or human cognitive overload. For another example, when objects move rapidly or suddenly against a background, human viewers might fail to properly sense them. In such cases, digital eyewear can adjust the sensory inputs to prevent human sensory overload or human cognitive overload.

Adjusting user receipt

[51] Digital eyewear can adjust the user’s receipt of sensory inputs, such as by filtering the sensory inputs before the user’s receipt thereof. For example, sensory overload from a particular direction can be mitigated using devices or methods that reduce intensity of sensory input. The intensity of sensory input can be reduced for the entire input or for only selected portions thereof. When warned, digital eyewear can mitigate sensory overload in advance thereof. When not warned, digital eyewear can mitigate sensory overload sufficiently rapidly that the user’s sensory or cognitive systems are not debilitated. Digital eyewear can also mitigate sensory overload using a buffer, in which digital eyewear receives sensory inputs, processes them, and provides the processed sensory inputs to the user so as to mitigate sensory overload before the user’s receipt thereof.
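A minimal sketch of such a buffer follows, assuming a simple per-frame luminance value and a multiplicative cap as the processing step; both the frame format and the attenuation rule are illustrative assumptions rather than a description of any particular embodiment.

```python
from collections import deque

class SensoryBuffer:
    """Receive sensory frames, process them, then present them to the user."""

    def __init__(self, max_luminance=0.9):
        self.queue = deque()
        self.max_luminance = max_luminance  # hypothetical overload cap

    def receive(self, frame):
        """Buffer an incoming frame (here, a dict with a 'luminance' key)."""
        self.queue.append(frame)

    def present(self):
        """Return the next frame, shaded if it would overload the user."""
        if not self.queue:
            return None
        frame = self.queue.popleft()
        if frame["luminance"] > self.max_luminance:
            frame["luminance"] = self.max_luminance  # remove the excess
        return frame
```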

Adjusting user reaction

[52] Digital eyewear can also adjust the user’s reaction to sensory inputs, such as by prompting the user to adjust their sensory system. For example, digital eyewear can prompt the user’s pupil to adjust in size, with the effect that the user’s eye can protect against excess luminosity, rapid changes in luminosity, or otherwise. For another example, digital eyewear can prompt the user to look in a different direction, such as away from adverse sensory stimuli, with the effect that the user’s eye can avoid receipt of those adverse sensory stimuli, or such as in the direction of relevant objects, with the effect that the user’s eye can view those objects directly rather than using peripheral vision, or otherwise.

Use in specific activities

[53] Digital eyewear can also be disposed to provide the user with the ability to receive sensory inputs and process them cognitively while participating in activities in which visual acuity is valuable to the viewer, such as:

— (A) operating a flying vehicle, such as an aircraft, an ultralight aircraft, a glider, a hang-glider, a helicopter, or a similar vehicle;

— (B) operating a ground vehicle, such as an automobile, a race car, or a similar vehicle;

— (C) operating a water vehicle, such as a kayak, motorboat, sailboat or yacht, speedboat, a cigarette boat, or a similar vehicle;

— (D) operating a motorcycle, a dirt bike, a bicycle, a unicycle, or a similar vehicle;

— (E) participating in a sport using relatively rapid sports equipment, such as baseball, basketball, an equestrian sport (such as dressage or horse racing), football, field hockey, ice hockey, jai alai, lacrosse, a snow sport (such as skiing, sledding, snowboarding, operating a snowmobile, or tobogganing or luge), soccer, or a similar sport;

— (F) participating in an activity in which shooting might occur, such as hunting, “laser tag”, skeet shooting (or otherwise shooting at a moving target), target shooting, or a similar activity;

— (G) participating in an activity in which using a sight (whether stereoscopic or not) might occur, such as using binoculars, using a rifle sight, or photography (whether still photography or motion-picture photography);

— (H) participating in an activity that involves tracking moving equipment, such as viewing rotating turbines or wheels, or for which it is useful to tune a viewing frequency to a frequency of angular position or movement, so as to operate in synchrony therewith;

— (I) participating in an activity in which critical, such as life-critical, decisions are made, such as performing as an emergency responder, emergency room personnel, a law enforcement officer, military personnel, or a similar activity;

— or otherwise as further described herein.

[54] As described herein, these specific activities can involve circumstances in which the user would gain substantially from enhanced audio or visual acuity. Enhanced audio/video acuity can help the user in circumstances in which the user would find it valuable to view one or more of:

— (A) objects that are in relatively rapid motion with respect to the user, or are otherwise difficult to see when the user is looking directly at them;

— (B) objects that are primarily viewable using the user’s peripheral vision, or other portions of the user’s vision that have a lesser degree of natural acuity;

— (C) objects that involve the user’s immediate or otherwise rapid reaction thereto, such as sports equipment (such as baseballs or tennis balls), terrain (such as road tracks or other vehicles), or equipment used by other persons (such as whether a device in a person’s hand is a cell phone or a handgun);

— (D) objects that are in motion with respect to the user, such as objects that are moving directly toward or away from the user, objects that are moving in a region of the user’s peripheral vision;

— (E) objects that are located poorly for viewing with respect to a background, such as objects that are brightly backlit, or for which the sun or other lighting is in the user’s eyes, or which appear before a visually noisy background, or otherwise are difficult to distinguish;

— or otherwise as described herein.

[55] As described herein, the digital eyewear can improve the user’s audio and/or visual acuity, or improve the user’s ability to see motion, in these specific activities or in these circumstances, without degrading the user’s normal ability to sense audio and/or visual information, and without interfering with the user’s normal activity. In one embodiment, the digital eyewear can operate at a relatively high frequency relative to object motion, such as about 80-150 Hz, or possibly somewhat more or less, such as over about 25 Hz. However, there is no particular requirement for any such limitation. The digital eyewear can operate at any frequency allowing the user to perform normally without degrading the user’s senses and without substantial sensory interference.
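As an illustration only, the following sketch selects an operating frequency that scales with relative object motion while staying within the broad range described above; only the 25 Hz floor and the roughly 80-150 Hz band come from the text, while the linear rule, its starting point, and its gain are assumptions.

```python
# Bounds taken from the text above (over about 25 Hz, about 80-150 Hz);
# the base frequency and gain are illustrative assumptions.
MIN_HZ = 25.0
MAX_HZ = 150.0
BASE_HZ = 80.0

def select_operating_frequency(angular_speed_deg_s, gain=0.5):
    """Scale the operating frequency with object motion, within bounds."""
    candidate = BASE_HZ + gain * angular_speed_deg_s
    return max(MIN_HZ, min(MAX_HZ, candidate))
```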

TERMS AND PHRASES

[56] The phrase “digital eyewear”, and variants thereof, generally refers to any device coupled to a wearer’s (or other user’s) input senses, including without limitation: glasses (such as those including lens frames and lenses), contact lenses (such as so-called “hard” and “soft” contact lenses applied to the surface of the eye, as well as lenses implanted in the eye), retinal image displays (RID), laser and other external lighting images, “heads-up” displays (HUD), holographic displays, electro-optical stimulation, artificial vision induced using other senses, transfer of brain signals or other neural signals, headphones and other auditory stimulation, bone conductive stimulation, wearable and implantable devices, and other devices disposed to influence (or be influenced by) the wearer. For example, the digital eyewear can be wearable by the user, either directly as eyeglasses or as part of one or more clothing items, or implantable in the user, either above or below the skin, in or on the eyes (such as contact lenses), or otherwise. As used herein, the phrase “digital eyewear” is not limited to visual inputs only; it can also operate with respect to audio inputs, haptic inputs, olfactory inputs, or other sensory inputs. The digital eyewear can include one or more devices operating in concert, or operating with other devices that are themselves not part of the digital eyewear.

[57] The phrase “motion blur”, and variants thereof, generally refers to artifacts of viewing objects for which there is relative motion between the user and object, in which the object appears blurred, smeared, or otherwise unclear, due to that relative motion. For example, motion blur can occur when the object and user are moving or rotating relatively quickly with respect to each other. For another example, motion blur can occur when the object is disposed in a portion of the user’s field of view other than where the user is focused, such as a peripheral vision field of view or an upper or lower range of the user’s field of view.

[58] The phrase “real time”, and variants thereof, generally refer to timing, particularly with respect to sensory input or adjustment thereto, operating substantially in synchrony with real world activity, such as when a user is performing an action with respect to real world sensory input. For example, “real time” operation of digital eyewear with respect to sensory input generally includes user receipt of sensory input and activity substantially promptly in response to that sensory input, rather than user receipt of sensory input in preparation for later activity with respect to other sensory input.

[59] The phrases “sensory input”, “external sensory input”, and variants thereof, generally refer to any input detectable by a human or animal user. For example, sensory inputs include audio stimuli such as in response to sound; haptic stimuli such as in response to touch, vibration, or electricity; visual stimuli such as in response to light of any detectable frequency; nasal or oral stimuli such as in response to aroma, odor, scent, taste, or otherwise; other stimuli such as balance; or otherwise.

[60] The phrase “sensory overload”, and variants thereof, generally refers to any case in which excessive volume of a sensory input (such as brightness, loudness, or another measure) can cause information to be lost due to human sensory limitations. For example, excessive luminance in all or part of an image can cause human vision to be unable to detect some details in the image. For another example, images having sensory overload can cause human vision to be unable to properly determine the presence or location of objects of interest.

[61] The phrase “cognitive overload”, and variants thereof, generally refers to any case in which excessive information provided by a sensory input can cause information to be lost due to human cognitive limitations. For example, excessive audio noise in an auditory signal, or excessive visual detail in an image, can cause human senses to be unable to properly determine the presence or location of objects of interest.

[62] The phrases “sensory underload”, “cognitive underload”, and variants thereof, generally refer to any case in which inadequate volume of a sensory input can cause information to be lost due to human inability to detect that information in the presence of other sensory inputs. For example, a portion of an image that is inadequately bright (for vision), inadequately loud (for hearing), or otherwise inadequately distinguished from background, can cause human senses to be unable to properly determine the presence or location of objects of interest.

[63] The phrases “shading”, “shading/inverse-shading”, and variants thereof, generally refer to any technique for altering a sensory input, including but not limited to the following (an illustrative sketch appears after this list):

— altering a total luminance associated with an image, such as by reducing luminance at substantially each pixel in the image;

— altering a luminance associated with a portion of an image, such as by reducing luminance at a selected set of pixels in the image;

— altering a luminance associated with a portion of an image, such as by increasing luminance at a selected portion of the image, to brighten that portion of the image, to highlight a border around or near that portion of the image, to improve visibility of that portion of the image, or otherwise;

— altering a loudness associated with an auditory signal, such as by reducing loudness at substantially each portion of the auditory signal;

— altering a loudness associated with a portion of an auditory signal, such as by reducing loudness at a selected set of times or frequencies in that auditory signal;

— altering a loudness associated with a portion of an auditory signal, such as by increasing loudness at a selected set of times or frequencies in that auditory signal, to improve listening to that portion of the auditory signal, or otherwise;

— altering a selected set of frequencies associated with an image, such as to change a first color into a second color, for the entire image, for a portion of the image, or otherwise;

— altering a selected set of frequencies associated with an image, such as to provide a “false color” image of a signal not originally viewable by the human eye, such as to provide a visible image in response to IR (infrared), UV (ultraviolet), or other information ordinarily not available to human senses;

— altering a sensory input other than visual or auditory sensory inputs, such as reducing/increasing an intensity of a haptic input, of an odor, or of another sense.
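As a concrete illustration of the luminance cases above, the following sketch scales pixel luminance over an optional region of an image; the pixel layout and the simple multiplicative rule are illustrative assumptions, not a description of any particular embodiment.

```python
def shade(pixels, region=None, factor=0.5):
    """Scale luminance for shading (factor < 1) or inverse-shading (factor > 1).

    pixels is a 2D list of normalized luminance values in 0..1; region is
    an optional (row_start, row_stop, col_start, col_stop) window. With no
    region, the whole image is altered.
    """
    row_range = range(len(pixels)) if region is None else range(region[0], region[1])
    for r in row_range:
        col_range = (range(len(pixels[r])) if region is None
                     else range(region[2], region[3]))
        for c in col_range:
            pixels[r][c] = min(1.0, pixels[r][c] * factor)  # clamp at full white
    return pixels
```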

[64] The phrases “signal input”, “external signal input”, and variants thereof, generally refer to any input detectable by digital eyewear or other devices. For example, in addition to or in lieu of sensory inputs and external sensory inputs, signal inputs can include

— information available to digital eyewear in response to electromagnetic signals other than human senses, such as signals disposed in a telephone protocol, a messaging protocol such as SMS or MMS or a variant thereof, an electromagnetic signal such as NFC or RFID or a variant thereof, an internet protocol such as TCP/IP or a variant thereof, or similar elements;

— information available to digital eyewear in response to an accelerometer, a gyroscope, a GPS signal receiver, a location device, an ultrasonic device, or similar elements;

— information available to digital eyewear in response to a magnetometer, a medical imaging device, an MRI device, a tomography device, or similar elements; or otherwise.

[65] The phrase “mobile device”, and variants thereof, generally refers to any relatively portable device disposed to receive inputs from, and provide outputs to, one or more users. For example, a mobile device can include a smartphone, an MP3 player, a laptop or notebook computer, a computing tablet or phablet, or any other relatively portable device disposed to be capable as further described herein. The mobile device can include input elements such as a capacitive touchscreen; a keyboard; an audio input; an accelerometer or haptic input device; an input coupleable to an electromagnetic signal, to an SMS or MMS signal or a variant thereof, to an NFC or RFID signal or a variant thereof, to a signal disposed using TCP/IP or another internet protocol or a variant thereof, to a signal using a telephone protocol or a variant thereof; another type of input device; or otherwise.

[66] The term “random”, and variants thereof, generally refers to any process or technique having a substantially non-predictable result, and includes pseudo-random processes and functions.

[67] The phrase “remote device”, and variants thereof, generally refers to any device disposed to be accessed, and not already integrated into the accessing device, such as disposed to be accessed by digital eyewear. For example, a remote device can include a database or a server, or another device or otherwise, coupled to a communication network, accessible using a communication protocol. For another example, a remote device can include one or more mobile devices other than a user’s digital eyewear, accessible using a telephone protocol, a messaging protocol such as SMS or MMS or a variant thereof, an electromagnetic signal such as NFC or RFID or a variant thereof, an internet protocol such as TCP/IP or a variant thereof, or otherwise.

[68] The phrase “user input”, and variants thereof, generally refers to information received from the user, such as in response to audio/video conditions, requests by other persons, requests by the digital eyewear, or otherwise. For example, user input can be received by the digital eyewear in response to an input device (whether real or virtual), a gesture (whether by the user’s eyes, hands, or otherwise), using a smartphone or controlling device, or otherwise.

[69] The phrase “user parameters”, and variants thereof, generally refers to information with respect to the user as determined by digital eyewear, user input, or other examination about the user. For example, user parameters can include measures of whether the user is able to distinguish objects from audio/video background signals, whether the user is currently undergoing an overload of audio/video signals (such as from excessive luminance or sound), a measure of confidence or probability thereof, a measure of severity or duration thereof, other information with respect to such events, or otherwise.

[70] The phrase “visual acuity”, and variants thereof, generally refers to the ability of a user to determine a clear identification of an object in the user’s field of view, such as one or more of:

— The object is presented in the user’s field of view against a background that involves the user having relatively greater difficulty identifying the object against that background. This is sometimes called “static” visual acuity herein.

— The object is moving at relatively high speed, or relatively unexpected speed, in the user’s field of view, that involves the user having relatively greater difficulty identifying a path of the object. This is sometimes called “dynamic” visual acuity herein.

— The object is presented in the user’s field of view at an angle, such as a peripheral vision angle or another non-frontal visual angle, that involves the user having relatively greater difficulty identifying the object. This is sometimes called “peripheral” visual acuity herein.

— The object is in motion with respect to the user, such as objects that are moving directly toward or away from the user, or objects that are moving in a region of the user’s peripheral vision.

— The object is located poorly for viewing with respect to a background, such as an object that is brightly backlit, or for which the sun or other lighting is in the user’s eyes, or an object which appears before a visually noisy background, or otherwise is difficult to distinguish.

[71] In one embodiment, the phrase “improving visual acuity”, and variants thereof, generally refers to improving the user’s audio and/or visual acuity, or improving the user’s ability to see motion, without degrading the user’s normal ability to sense audio and/or visual information, and without interfering with the user’s normal sensory activity. For example, as described herein, when the user’s visual acuity is improved, the user should still be able to operate a vehicle, such as driving a motor vehicle or piloting an aircraft, or operating another type of vehicle.

FIGURES AND TEXT

Example digital eyewear

[72] Fig. 1 shows a conceptual drawing of an example digital eyewear system.

System including digital eyewear

[73] A system 100, such as operated with respect to a user 101 and with respect to an object 102 in the user’s field of view 103, is described with respect to elements as shown in the figure, and as otherwise described herein, such as:

— digital eyewear 110, including one or more lenses 111, at least one eye-tracking element 112, at least one object-tracking element 113, and possibly other elements;

— a computing device 120, including at least one processor 121, program and data memory 122, one or more input/output elements 123, and possibly other elements;

— a communication system 130, including at least one communication device 131 and at least one remote device 132 (such as a database, server, a second digital eyewear, and possibly other elements).

[74] In one embodiment, the user 101 can include one or more natural persons, operating individually or cooperatively, with or without assistance from an ML (machine learning) or AI (artificial intelligence) technique, and with or without assistance from another software element. Alternatively, the user 101 can include one or more software elements, disposed to perform functions as further described herein with respect to the user.

[75] In one embodiment, the digital eyewear 110 can be disposed to include eyewear or associated optical systems, such as glasses or sunglasses, contact lenses, goggles, facemasks or helmets, or other eyewear. For example, the digital eyewear 110 can include glasses having lenses operating under control of a computing device 120, in which the glasses include lenses 111 that can be controlled by the computing device. In such cases, the lenses 111 can have a corrective lens effect, such as using refraction to correct for myopia, presbyopia, astigmatism, or otherwise. Alternatively, the lenses 111 can include a shading/inverse-shading element, whether additional to corrective lenses or not, and disposed in line between the user’s eye(s) and the external scene.

[76] In one embodiment, the object 102 can include

— any moving or still object, such as a ball, a flying or rolling object, an animal or person (such as a person being searched for during a search/rescue operation, or such as a species of animal or bird being observed by a user), or another type of moving or rotating object;

— any collection of related moving or still objects, such as a cloud, a flock of birds, a moving animal or person, a crowd of animals or people, or another type of collection, in which the moving objects can be moving linearly or rotating, or a combination thereof;

— any image displayed on an object, such as a billboard or imaging display, a presentation, an advertisement, or another information display (whether that image is still or moving, or whether that image is complex or confusing);

— any element of terrain, such as a road, a road sign, a median or divider, a traffic control or traffic light, a tunnel entrance or exit, a wall, another vehicle, a parking spot, or another terrain element;

— any element used in an activity, such as a firefighting or search/rescue activity, a law enforcement or military activity, a sports activity, or another activity; or otherwise.

Distinction from background

[77] As further described herein, any moving object, whether moving with respect to the earth or moving with respect to the user 101 as an observer, can present the possibility of sensory or cognitive overload. For example, a moving object can include an object moving linearly or rotating, or a combination thereof, with respect to a user. This can sometimes be observed as blur, in cases in which human perception of the object is inadequate to provide sufficient information for a sharp image. Sensory or cognitive overload can occur for one or more of:

— objects that are moving or located in a part of the user’s field of view 103 that has a background that makes it difficult to identify the object, such as when the object has a bright light behind it (such as the sun or a field light), or when the object has a complex scene behind it (such as a stadium audience);

— objects that are moving quickly with respect to the user 101, particularly when moving in the user’s line of sight, such as directly toward the user 101;

— objects that are moving or located in a part of the user’s field of view 103 that has lesser resolution, such as the user’s peripheral vision or another non-frontal portion of their field of view, or at substantial distance (particularly when the user is nearsighted); or otherwise.

[78] As further described herein, objects that are inadequately distinguished from background, whether because the object is similar to the background, or because the background includes excessive information, can present the possibility of sensory or cognitive overload. This can sometimes be observed as camouflage, or a variant thereof, in cases in which human perception of the object is inadequate to provide sufficient information for a sharp image. This can also sometimes be observed when the object is too small, too indistinct, too undefined, or otherwise too difficult to detect, with respect to the background. Sensory or cognitive overload can occur for one or more of:

— objects that are observed against a background that has excessive noise, whether auditory, visual, or otherwise;

— objects that are observed against a background that has information that attracts the attention of the user 101, such as advertising, communication, or otherwise;

— objects that are observed against a background that has inadequate differences from the objects themselves, such as when objects are deliberately or by happenstance marked/colored similarly to their environment;

— objects that are too small with respect to a size of a region of the background the user observes, such as insects, small objects, or otherwise; or otherwise.

[79] As further described herein, objects can also be inadequately distinguished from background due to the user’s attentive or cognitive limitations, such as:

— when the user is attentive to other objects, such as when those other objects are prominent and the particular object to be distinguished is not as prominent, or such as when the user is distracted by other activity in the background;

— when the user does not expect the object to be present, such as when the object is relatively unlikely to be present and the user is thus inattentive to its presence, or such as when the object is not available for view long enough for the user to notice it;

— when the object appears or disappears relatively suddenly, such as when the object is relatively hidden until it appears in a location the user does not expect, or such as when an actor in the user’s field of view deliberately hides the object until revealing it;

— when the object is located in a portion of the user’s field of view for which the user has a lesser degree of natural visual acuity, such as a peripheral vision section of the user’s field of view, or such as another non-frontal visual section of the user’s field of view; or otherwise.

Relative motion blur

[80] In one embodiment, the digital eyewear 110 can identify an object subject to motion blur. Motion blur can result from the object moving or rotating, or a combination thereof, or from the user 101 moving or rotating, or a combination thereof. The digital eyewear 110 can provide a modified set of sensory inputs with respect to the object, so as to eliminate, or at least mitigate, motion blur. For example, the digital eyewear 110 can provide sensory inputs that include a sequence of still images of the object, or a sequence of short video images of the object. The user 101 can more easily identify the object in response to the sequence of those individual images, and can more easily identify the speed, direction, size, and possibly rotation, of the object in response thereto.
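By way of a non-limiting sketch only, the following Python fragment illustrates one way a continuous frame stream might be subsampled into such a sequence of stills; the function and parameter names are hypothetical and are not part of this specification.

```python
from typing import Any, Iterable, Iterator, Tuple

def strobe_stills(frames: Iterable[Tuple[float, Any]],
                  interval: float) -> Iterator[Tuple[float, Any]]:
    """Yield roughly one frame per `interval`, dropping the frames between."""
    next_due = None
    for t, frame in frames:
        if next_due is None or t >= next_due:
            yield t, frame
            next_due = t + interval

# A synthetic 1 kHz stream (timestamps in milliseconds) reduced to one
# still every 10 ms, matching the 100 Hz baseball example later in the text.
stream = ((t_ms, f"frame{t_ms}") for t_ms in range(100))
print([t for t, _ in strobe_stills(stream, interval=10)])
# -> [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
```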

[81] To identify an object possibly subject to motion blur, the digital eyewear 110 can use an ML (machine learning) or AI (artificial intelligence) technique to receive the external sensory inputs, and process those sensory inputs substantially in real time. For example, the ML or AI technique can include an image recognition system tuned to one or more objects of particular interest to the user 101; the ML or AI technique can thus identify those objects. In response thereto, the digital eyewear 110 can adjust the external sensory inputs to make the objects more prominent to the user 101. In such cases, the digital eyewear 110 can shade/inverse-shade the objects so as to increase their contrast against the background in the user’s field of view. In other such cases, the digital eyewear 110 can alter the coloring of the objects so as to increase their contrast against the background, or otherwise decrease the cognitive load on the user 101 to identify the objects.
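As a non-limiting illustration of the contrast measurement this paragraph describes, the following Python sketch computes a Weber contrast for a recognized object region and derives a background shading factor from it; the names, the 0.4 contrast target, and the synthetic frame are all assumptions for illustration.

```python
import numpy as np

def weber_contrast(frame: np.ndarray, object_mask: np.ndarray) -> float:
    """Weber contrast of the object region against the rest of the frame."""
    object_lum = frame[object_mask].mean()
    background_lum = frame[~object_mask].mean()
    return (object_lum - background_lum) / max(background_lum, 1e-6)

def background_shading_factor(contrast: float, target: float = 0.4) -> float:
    """Shade the background more heavily the less the object stands out."""
    if abs(contrast) >= target:
        return 0.0  # object already distinct enough: no shading needed
    return (target - abs(contrast)) / target

# Synthetic grayscale frame: a dim ball (luminance 170) on a bright sky (200).
frame = np.full((240, 320), 200.0)
object_mask = np.zeros(frame.shape, dtype=bool)
object_mask[100:140, 150:190] = True
frame[object_mask] = 170.0

c = weber_contrast(frame, object_mask)
print(f"contrast={c:+.2f}, background shading={background_shading_factor(c):.2f}")
# -> contrast=-0.15, background shading=0.62
```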

[82] Alternatively, the digital eyewear 110 can receive a signal, such as a radio signal or other electromagnetic signal, or such as an ultrasonic signal, or another type of signal, from the object, so as to identify the objects of particular interest to the user 101. For example, the object can include a baseball including an internal transmitter, emitting an identifiable signal. The signal can possibly be encrypted so as to allow only selected sets of digital eyewear 110 (such as only digital eyewear 110 assigned to members of a selected team) to identify it.

[83] Alternatively, the digital eyewear 110 can emit a signal, such as a radio signal or other electromagnetic signal, or such as an ultrasonic signal, or another type of signal, and obtain a reflection thereof from the object, so as to identify the objects of particular interest to the user 101. For example, when the object includes a baseball, the digital eyewear 110 can emit an ultrasonic signal and obtain a reflection indicating a location, speed, direction, and possibly rotation, of the object. As further described herein, the signal can possibly be encrypted so as to allow only selected sets of digital eyewear 110 (such as only digital eyewear 110 assigned to members of a selected team) to identify it.
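For illustration only, the following Python sketch shows the kind of arithmetic such an ultrasonic approach could rest on: round-trip time-of-flight gives range, and the two-way Doppler shift of the echo gives closing speed. The 40 kHz emitter frequency and the echo values are hypothetical.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def range_from_echo(round_trip_s: float) -> float:
    """One-way distance to a reflector, from round-trip echo time."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

def closing_speed(f_emitted_hz: float, f_received_hz: float) -> float:
    """Speed of a reflector approaching along the line of sight, from the
    two-way Doppler shift: f_r = f_e * (c + v) / (c - v), solved for v."""
    return SPEED_OF_SOUND * (f_received_hz - f_emitted_hz) / (
        f_received_hz + f_emitted_hz)

print(f"{range_from_echo(0.10):.1f} m")                # 0.10 s echo -> 17.2 m
print(f"{closing_speed(40_000.0, 51_990.0):.1f} m/s")  # ~44.7 m/s (a 100 mph ball)
```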

Peripheral vision blur

[84] Alternatively, the digital eyewear 110 can identify a particular portion of the background, such as a peripheral vision part or other non-frontal visual portion of the user’s field of view, for which the user 101 would be cognitively overloaded when in motion and viewing objects against that background. The digital eyewear 110 can determine a speed at which the user 101 is traveling, thus identifying an amount of cognitive overload due to use of peripheral vision against that portion of background. The digital eyewear 110 can thus provide modified sensory inputs to the user 101 so as to reduce the user’s cognitive overload.

[85] Alternatively, the digital eyewear 110 can identify a particular portion of the background, such as a peripheral vision part or other non-frontal visual portion of the user’s field of view, for which the user 101 would have a lesser ability to perceive objects, or details of objects. The digital eyewear 110 can determine a portion of the user’s field of view in which the object appears, such as a peripheral vision portion or other non-frontal visual portion of the user’s field of view. The digital eyewear 110 can provide modified sensory inputs to the user 101 so as to enhance the user’s peripheral visual acuity.

Cognitive distraction

[86] Alternatively, the digital eyewear 110 can identify one or more selected objects in the background that are substantially irrelevant to the user’s focused-upon object, and can edit out those substantially irrelevant objects from the background before presenting a field of view to the user 101. This can have the effect that the user can (cognitively) focus upon those objects of particular interest, while substantially ignoring those objects not of interest.

[87] For example, in a sports context, a user 101 who is a participant can focus on a baseball (so as to catch it) and can have audience activity and billboard advertisements edited out of the background they perceive, so as to allow the user to more easily (cognitively) focus upon the baseball. For another example, in a law enforcement context, a user 101 who is a law enforcement officer can have extraneous vehicular motion edited out of the background they perceive, so as to allow the user to focus on a suspect who might be drawing a firearm.

[88] For example, in a racing context, a user 101 who is driving a racing car can focus on the road and possible obstacles thereon, and can have glare and excessively bright or otherwise distracting light sources edited out of the background they perceive, so as to allow the user to drive more effectively and safely, and so as to enhance the user’s visual acuity. For example, oncoming bright lights can be edited out so as to allow the user improved visual acuity to one or more sides, such as traffic that parallels or is merging with the user.

[89] For example, in a medical context, a user 101 who is subject to epilepsy or PTSD can have triggering stimuli modified in the background they perceive, so as to allow the user to engage with their field of view with a substantially lesser risk of their medical condition being triggered. In such cases, in the case of epilepsy, the digital eyewear 110 can modify the sensory inputs in the user’s field of view so as to remove light at frequencies deemed likely to trigger a seizure. Alternatively, in such cases, in the case of PTSD, the digital eyewear 110 can modify the user’s audio sensory inputs so as to remove excessively loud or surprising sounds, automobile engine backfires and other sounds similar to gunfire, or other audio/video sensory inputs deemed likely to trigger a flashback or other ill effects of PTSD.

Modified sensory inputs

[90] In one embodiment, the digital eyewear 110 can receive external sensory inputs, process those sensory inputs substantially in real time, and provide modified versions of those sensory inputs to the user 101, so as to allow the user to obtain a better view of those objects than would be provided due to sensory or cognitive overload, and so as to enhance the user’s visual acuity. The digital eyewear 110 can use the computing device 120 to select, in real time, portions of the external sensory inputs to provide to the user 101. The portions of the external sensory inputs can be selected so as to be relatively easy for the user 101 to process in real time, so as to not be subject to sensory or cognitive overload, and so as to enhance the user’s visual acuity.

[91] In one embodiment, the digital eyewear 110 can receive external sensory inputs with respect to a moving object, select a sequence of still images of that moving object, and present only that sequence of still images (not the entire moving image) to the user 101 for observation. The user 101 can process each such still image so as to obtain a better observation of the moving object, and can process the sequence of such still images, wherein the brain integrates the images so as to obtain a continuous view of the object’s motion. This can have the effect of reducing any sensory or cognitive overload on the user 101, and so as to enhance the user’s visual acuity.

[92] For example, the digital eyewear 110 can present a baseball moving at 100 miles/hour (approximately 44.7 meters/second) to the user 101 as a sequence of still images that are 10 milliseconds apart. In such cases, the baseball moves approximately 1.47 feet from each such still image to the next. The user 101 can relatively easily detect the baseball in each such still image, and can relatively easily determine the motion of the baseball from the change in its location from each such still image to the next one. Alternatively, a different selection of timing for each such still image can be used. In a preferred embodiment, the selected timing can be such that the user’s sensory or cognitive overload is minimized, or the user’s visual acuity is maximized.

[93] For another example, the digital eyewear 110 can present the same baseball to the user 101 in short real time moving images that are 1 millisecond long and 10 milliseconds apart. Thus, the user 101 would see only about 10% of the actual motion of the baseball. In such cases, the user 101 can relatively easily detect the baseball in each such short real time moving image, and can relatively easily determine a speed and direction of the baseball from each such short real time moving image. Alternatively, a different selection of timing for each such short real time moving image can be used. The fraction of the complete moving image can be larger or smaller, and the duration of each such short real time moving image can be longer or shorter. Similar to the earlier example, in a preferred embodiment, the selected timing can be such that the user’s sensory or cognitive overload is minimized, or the user’s visual acuity is maximized.
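The timing arithmetic of the two preceding examples can be checked with a short Python sketch; the parameter names are illustrative only.

```python
MPH_TO_MPS = 0.44704  # miles/hour to meters/second

def spacing_per_image_m(speed_mph: float, interval_s: float) -> float:
    """Distance the object travels between successive presentations."""
    return speed_mph * MPH_TO_MPS * interval_s

speed_mph = 100.0    # the baseball of the first example
interval_s = 0.010   # one presentation every 10 milliseconds
clip_len_s = 0.001   # second example: each short moving image lasts 1 ms

spacing = spacing_per_image_m(speed_mph, interval_s)
print(f"spacing: {spacing:.3f} m ({spacing / 0.3048:.2f} ft)")  # 0.447 m, 1.47 ft
print(f"fraction of motion shown: {clip_len_s / interval_s:.0%}")  # 10%
```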

[94] In one embodiment, the digital eyewear 110 can receive external sensory inputs with respect to one or more moving objects that the user 101 is not intending to focus upon, select one or more filters to reduce those objects in intensity or prominence, and present a modified field of view to the user. This can have the effect that the user 101 can engage with the modified field of view, allowing the user 101 to avoid being distracted or otherwise cognitively overloaded by the presence of those objects, and so as to enhance the user’s visual acuity.

[95] For example, the digital eyewear 110 can receive external sensory inputs including a flashbang grenade, filter the background to remove the intensity (or even the entire presence) of that grenade therefrom, and present a field of view to the user 101 that avoids the sensory and cognitive overload of that grenade. For another example, the digital eyewear 110 can receive external sensory inputs including other excessively bright light sources, such as roadway lamps or such as the sun upon exit from a darkened tunnel, and can shade those light sources or filter them to reduce the intensity of their color, so as to allow the user 101 to drive a vehicle at rapid speed with relative effectiveness and safety, and with enhanced visual acuity upon entrance and exit from the tunnel.

[96] In one embodiment, the user’s field of view 103 can include any area within sight or possibly within sight of the user 101, whether or not easily discernable to the user. For example, the user’s field of view 103 can include a frontal field of view, a peripheral field of view, an upward/downward field of view, a reflection from a reflective surface, another viewable element, or otherwise.

[97] In such cases, when the user’s field of view 103 includes a field of view in which the user 101 has a naturally lessened visual acuity, such as a peripheral portion of the user’s field of view or another non-frontal portion of the user’s field of view, the digital eyewear 110 can present external sensory inputs to the user 101 so as to reduce the sensory or cognitive overload on the user’s peripheral (or otherwise non-frontal) portion of their field of view. This can have the effect of improving the user’s visual acuity in portions of their field of view with an otherwise naturally lessened visual acuity.

Lenses

[98] In one embodiment, the lenses 111 can include one or more lenses 111 disposed to be coupled to a carrier, such as an eyeglass frame or otherwise disposed near the user’s eye(s). Alternatively, the lenses 111 can include one or more lenses 111 disposed to be coupled to the user’s eye(s), such as contact lenses, implantable lenses, or other techniques with respect to detecting or altering external sensory input directed to the user’s eye(s). For example, the lenses 111 can include an RID (retinal image display), a holographic display, a binocular or monocular imaging system, or a closed-circuit camera and television display system.

[99] In one embodiment, the lenses 111 can include any other technique for receiving external sensory input (audio/video or otherwise), for coupling that external sensory input to the computing device 120 to generate processed sensory input, and for providing that processed sensory input to the user 101. For example, the lenses 111 can include a first (real-world facing) lens 111a disposed to receive the external sensory input, and a second (user-facing) lens 111b disposed to provide the processed sensory input to the user 101. A shading/inverse-shading element 111c can be disposed between the real-world facing lens 111a and the user-facing lens 111b.

[100] In one embodiment, the lenses 111 can include any real-world receiving device (such as the real-world facing lens 111a), shading/inverse-shading device (such as the shading/inverse-shading element 111c), and user presentation device (such as the user-facing lens 111b). For example, the shading/inverse-shading element 111c can include the computing device 120 and associated software elements disposed to perform shading/inverse-shading on external sensory inputs, other shading/inverse-shading elements 111c disposed logically between the real-world receiving device and the user presentation device, or otherwise.

[101] In one embodiment, the lenses 111 can be coupled to a carrier 114, such as an eyeglass frame, a face mask, a pince-nez, a set of ski goggles or other eye protectors, another device disposed to be coupled to the user’s face, or otherwise. In such cases, the carrier 114 can be disposed to support the eye-tracking element 112, the object-tracking element 113, the computing device 120, the communication system 130, or other elements.

[102] Alternatively, the lenses 111 can include contact lenses, implantable lenses (such as a replacement for the user’s natural eye lenses), or other elements capable of performing the functions described herein. In such cases, contact lenses can include one or more identifiable points, such as a pattern or set of spots that reflect IR (infrared) or other frequencies, are phosphorescent with respect to IR or other frequencies, are electrostatically or electromagnetically coupled to a detector, are otherwise disposed to detect the user’s eye gaze direction, or otherwise. When the lenses 111 include contact lenses, implantable lenses, or otherwise, the digital eyewear 110 can determine eye gaze direction with respect to a location of the lenses 111, and can present results of processing external sensory inputs using another technique, such as an RID (retinal image display).

[103] Alternatively, one or more of the digital eyewear’s elements can be coupled to, or implemented using, a mobile device (not shown), such as a smartphone, iPod™, iPad™, laptop, wearable or implantable device, another device having the functions described herein, or otherwise.

[104] The lenses 111 can include right and left lenses 111, such as disposed with respect to the user’s right eye and left eye (not shown), or with respect to a right and left portion of the user’s field of view 103. Effects applied to the user’s field of view 103 can be divided and separately applied with respect to the right and left lenses 111 or can be further divided and separately applied with respect to smaller elements, such as individual pixels 104.

[105] The lenses 111 can also include forward and peripheral elements, such as disposed with respect to the forward and peripheral areas of the user’s field of view 103. Similarly, the lenses 111 can also include central and peripheral elements, such as disposed with respect to the central and peripheral areas of the user’s field of view 103. The forward and peripheral areas of the user’s field of view 103, or the central and peripheral vision areas of the user’s field of view 103, can be further divided with respect to smaller elements, such as individual pixels 104.

[106] In one embodiment, the lenses 111 can also use effects applied to the right and left lenses 111, or to individual pixels 104, to provide images to the user’s vision. Images provided to the user’s vision can include images to be overlaid with natural sensory inputs. Images provided to the user’s vision can also include effects to be applied to natural sensory inputs, such as shading/inverse-shading effects, color filtering effects, polarization effects, frequency-altering effects, false-coloring effects, other effects, and otherwise.

Eye-tracking, object-tracking, and other elements

[107] In one embodiment, the eye-tracking element 112 can include one or more cameras directed inward toward the user’s eyes. The inward-directed cameras can be disposed to identify one or more elements of the user’s eyes, such as the pupils, irises, sclera, eyelids, tear ducts, orbital bones or other facial features.

[108] For example, the cameras can be disposed to determine in what direction the user’s eye gaze is directed, such as in response to a location of the pupils, irises, sclera, or otherwise. Similarly, the cameras can be disposed to determine in what direction the user’s eye gaze is directed, such as in response to the position of the pupils, irises, or otherwise, with respect to the sclera, eyelids, tear ducts, orbital bones, or other facial features.

[109] For another example, the cameras can be disposed to determine at what distance the user is focusing their vision, such as in response to a focal length, pupil width, pupillary distance, or otherwise.

[110] In one embodiment, the object-tracking element 113 can include one or more cameras directed outward toward an object in a gaze direction of the user’s eyes. For example, the object can be a stationary object (although the user 101 can be moving or rotating with respect to the stationary object) or can be a moving object. The outward-directed cameras can be disposed to identify one or more designated types of objects, such as a playing piece or other sports equipment (such as with respect to a sports application), a person or item of equipment (such as with respect to firefighting, police, search and rescue, or military applications), a vehicle (such as with respect to traffic applications), a friend or other person with which the user is conversing (such as with respect to social applications), an object that can be the subject of commerce (such as with respect to commerce applications), or otherwise.

[111] In one embodiment, the computing device 120 can be disposed to receive information from the eye-tracking element 112 and from the object-tracking element 113. The computing device 120 can also be disposed to exchange information with the program and data memory 122 and with one or more of the input/output elements 123. For example, the computing device 120 can be disposed to receive information from the user 101 by the latter manipulating one or more of the input elements, and can be disposed to provide information to the user 101 by the computing device 120 controlling one or more of the output elements.

[112] The input/output elements 123 can include one or more buttons, capacitive sensors, dials, or other input devices disposed to be manipulated by the user 101. The input/output elements 123 can also include one or more audio/video output elements, such as capable of presenting sound or video to the user 101 using one or more speakers, lights, controls coupled to one or more lenses 111, retinal input displays, or other audio/video elements. The output elements can also include one or more other elements disposed to be sensed by the user 101, such as haptic elements (buzzers, pressure elements, vibration elements, or otherwise), electric charges or other devices disposed to trigger feeling on the user’s skin, or otherwise.

[113] The computing device 120 can also be disposed to exchange information with one or more remote devices 132, such as using the one or more communication devices 131. The computing device 120 can be disposed to perform one or more functions in response to the program and data memory 122 with respect to information it receives from other devices, and can be disposed to send information to other devices in response to one or more of those functions.

[114] The computing device 120 can also be disposed to be coupled to a remote device 132 that can also perform one or more computing functions, such as a database capable of maintaining information, a server capable of receiving requests and providing responses thereto, or one or more other digital eyewear. For example, a first digital eyewear 110 can communicate with a second digital eyewear 110, such as to communicate between a first user 101 and a second user 101, or such as to provide joint operations between more than one such digital eyewear 110.

Mobile device

[115] For example, a mobile device 140 can perform the functions described herein with respect to the lenses 111 using one or more of its cameras or microphones as real-world facing lenses 111a and using its presentation display or speaker as user-facing lenses 111b. Similarly, the mobile device 140 can perform the functions described herein with respect to the computing device 120 using one or more of its processors, can perform the functions described herein with respect to the communication system 130 using its communication capability, or otherwise. The mobile device 140 can also couple one or more of the digital eyewear’s elements using its communication capability to couple those elements.

[116] For another example, the mobile device 140 can perform the functions described herein with respect to the eye-tracking element 112 using one or more of its (user-facing) cameras, can perform the functions described herein with respect to the object-tracking element 113 using one or more of its (real-world facing) cameras, or otherwise. Similarly, the mobile device 140 can perform the functions described herein with respect to the input/output elements 123 using a capacitive touch screen or microphone (as input elements), using a presentation display or speaker (as output elements), or otherwise.

[117] As further described herein, the shading/inverse-shading elements can include any device suitable to perform functions described herein, including one or more visual effects that can be imposed by the computing device 120, such as shading (with respect to total luminance or with respect to particular frequencies), polarization (also with respect to total luminance or with respect to particular frequencies), filtering (with respect to time-varying elements of total luminance or particular frequencies), other effects described herein, and otherwise.

Example inputs including sensory overload

[118] Fig. 2 (collectively including fig. 2A-2B) shows a conceptual drawing of example sensory inputs including possible sensory overload or perceptual noise. Fig. 2A shows a conceptual drawing of an example user viewing a moving object with sensory or cognitive overload. Fig. 2B shows a conceptual drawing of an example user viewing an object in response to acoustic recognition of the object.

Viewing moving objects with sensory or cognitive overload

[119] Fig. 2A shows a conceptual drawing of an example user viewing a moving object with sensory or cognitive overload.

[120] In one embodiment, the user 101 can be disposed to view one or more objects in a user’s field of view. For example, the objects can be moving, possibly at a speed that demands greater sensory or cognitive effort than would ordinarily be required of the user 101 when viewing those one or more objects. The user’s ability to accurately or distinctly view those one or more objects, while moving, is sometimes herein called “dynamic visual acuity” with respect to those objects, or with respect to their motion.

[121] For another example, the objects can also be still, but possibly be presented against a background that makes it difficult to distinguish the object, thus (again) demanding more sensory or cognitive effort than would ordinarily be required of the user 101 when viewing those one or more objects, thus possibly having the effect of reducing the user’s visual acuity with respect to those objects. The user’s ability to accurately or distinctly view those one or more objects against a background, even though still, is sometimes herein called “static visual acuity” with respect to those objects, or with respect to the background.

[122] For another example, the (one or more) objects can also be presented at an angle or in the user’s peripheral vision, thus (again) demanding more sensory or cognitive effort than would ordinarily be required of the user 101 when viewing those objects, thus possibly having the effect of reducing the user’s visual acuity with respect to those objects. The user’s ability to accurately or distinctly view those objects against a background, even though still, is sometimes herein called “peripheral visual acuity” with respect to those objects, or with respect to the angle at which they are presented.

[123] In one embodiment, the user 101 can be disposed to view a (possibly moving) object 211, with respect to a (possibly confusing) background 212, or with respect to a (possibly substantially non-frontal) angle.

[124] For example, the object 211 can include a baseball or other sports object, the background 212 can include a sky or a sports stadium, and the angle can include a direction with respect to the user 101. The sky or sports stadium can include a light source 213 that provides backlighting to the object 211, such as the sun or stadium lighting, and can include signs or other distractions 214 in the stadium or the audience. In such cases, the backlighting or the distractions can degrade the user’s ability to see the object 211 with adequate visual acuity, such as by imposing sensory overload (possibly due to excessive brightness from the sun or stadium lighting) or cognitive overload (possibly due to confusing inputs from the distractions).

[125] When the sports stadium is substantially enclosed, such as when it has a set of observer locations or seats disposed under a physically supported roof, the roof can be supplemented with one or more layers of light-altering elements. For example, the roof can be supplemented with one or more polarizing layers, so as to reduce the effect of glare from sunlight (either direct sunlight or sunlight reflected from a cloud layer). For another example, the roof can be supplemented with one or more shading/inverse-shading layers, so as to reduce a degree of brightness from the sky or from outside lighting.

[126] For another example, the object 211 can possibly be moving at a high speed with respect to the user 101, or at a speed unexpected by the user, such as when a baseball or other sports object is suddenly directed at the user. The rapid or unexpected movement of the object 211 can degrade the user’s ability to see the object 211 (or its relative movement) with adequate visual acuity, such as by imposing sensory overload (possibly due to rapid movement) or cognitive overload (possibly due to unexpected movement).

[127] For another example, the object 211 can possibly be moving at an angle with respect to the user 101 for which the user has lesser natural visual acuity, such as a peripheral vision angle, or more generally, any non-frontal visual angle. The movement of the object 211 at a peripheral vision angle or another non-frontal visual angle can degrade the user’s visual acuity, such as by imposing sensory overload or cognitive overload (possibly due to the lesser natural visual acuity the user 101 might have with respect to that angle).

[128] In one embodiment, the user 101 can be disposed to view the object 211 when the light source 213 is moving with respect to the user 101 or with respect to the object 211, or when one or more reflective surfaces provides glare or reflections with respect to the user’s field of view 103 or with respect to the light source 213. For example, movement with respect to an angle of the object 211 and the light source 213 can change shadows cast by or on the object, or can otherwise change the user’s viewable image of the object.

Multiple still images

[129] In one embodiment, the digital eyewear 110 can provide shading/inverse-shading with respect to the image of the object 211. The digital eyewear 110 can provide a sequence of still images 215a of the object 211 in lieu of a continuous moving image 215b of the object 211. For example, the digital eyewear 110 can provide a sequence of still images 215a, one for each foot of movement of the object 211. This can provide advantages with respect to backlighting, distractions, and blurring of the continuous moving image 215b.

[130] In one embodiment, the digital eyewear 110 can independently shade/inverse-shade each such still image 215a with respect to the particular interaction between the user 101, the object 211, the background 212, the light source 213, and any distractions 214. For example, the digital eyewear 110 can independently detect, for each such still image 215a, (A) an amount of contrast between the object 211 and the background 212, (B) an amount of sensory overload due to excessive lighting or glare from the light source 213, (C) an amount of cognitive overload due to the image of the object 211 with respect to any distractions 214, or otherwise. This can have the effect that the digital eyewear 110 can independently provide each such still image 215a with an optimal amount of shading/inverse-shading.

[131] In one embodiment, the user 101 can detect the object 211, due to identifying the object by eye against the background 212. The digital eyewear 110 can therefore shade/inverse-shade the portion of the background 212 distant from the object 211, or can shade/inverse-shade the light source 213 separately from the object 211. For example, the digital eyewear 110 can shade/inverse-shade the entire background 212 other than a portion of the user’s field of view 103 near the object 211. For another example, the digital eyewear 110 can shade/inverse-shade the user’s entire field of view 103, and only expose the object 211 when a time for the still image 215a occurs.

[132] In such cases, due to the user’s ability to detect the object 211 by identifying it by eye against the background 212, the digital eyewear 110 can reduce the relative contrast between the object and its background in the user’s field of view 103, so as to improve the user’s visual acuity with respect to the object. Moreover, the digital eyewear 110 can perform object recognition with respect to the object 211, determine an amount of relative contrast between the object and its background 212, and adjust an amount of shading/inverse-shading in response thereto, also so as to improve the user’s visual acuity with respect to the object.

[133] In such cases, the user 101 should see a sequence of such still images 215a, tracking motion of the object 211 along the path it would follow with respect to the continuous moving image 215b. The user 101 should see each such still image 215a and be able to track the motion of the object 211 as if they were viewing the continuous moving image 215b, but with the digital eyewear 110 performing shading/inverse-shading independently for each such still image 215a. This can have the effect that the user 101 can view the object 211 as well as if they were viewing the continuous moving image 215b, using the sequence of the still images 215a in lieu thereof, thus improving the user’s visual acuity with respect to the object.

[134] In one embodiment, the digital eyewear 110 can provide a sequence of such still images 215a at a relatively high frequency relative to motion of a selected object, such as about 80-150 Hz, possibly somewhat more or less, or possibly another frequency more than about 25 Hz. For example, when presenting a baseball, the digital eyewear 110 can provide one still image 215a for each 10 milliseconds of motion, thus providing a sequence of such images at 100 Hz. While tracking the object to be presented, the digital eyewear 110 can determine its velocity and adjust the frequency at which it provides the still images 215a so as to optimize a user’s visual acuity.

[135] For example, when presenting a baseball in a baseball game, the digital eyewear 110 can determine a velocity of the baseball relative to the user and adjust the frequency at which it provides still images 215a in response thereto. When the baseball is moving toward the user at 100 miles/hour (thus, about 44.7 meters/ second), the digital eyewear 110 can present the baseball to the user at about 90 Hz. This can have the effect that the user sees the still images 215a in a sequence about 0.50 meters apart.
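A minimal Python sketch of this frequency selection, assuming a target inter-image spacing of 0.5 meters and the 80-150 Hz band mentioned above (both treated here as assumptions), might read:

```python
def strobe_rate_hz(relative_speed_mps: float,
                   target_spacing_m: float = 0.5,
                   lo_hz: float = 80.0,
                   hi_hz: float = 150.0) -> float:
    """Rate at which successive still images land target_spacing_m apart,
    clamped to the presentation band."""
    return max(lo_hz, min(hi_hz, relative_speed_mps / target_spacing_m))

print(strobe_rate_hz(44.7))  # 100 mph baseball -> 89.4 Hz, i.e. about 90 Hz
print(strobe_rate_hz(10.0))  # slower crossing object -> clamped to 80.0 Hz
```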

[136] In such cases, when the baseball is moving across the user’s field of view, but not directly toward or away from the user, the digital eyewear 110 can provide a sequence of still images 215a at a different frequency, possibly slower or faster, when that would help the user see the baseball with better visual acuity.

[137] For another example, the digital eyewear 110 can provide a still image 215a showing the baseball in relatively high contrast with its background. When the baseball is travelling toward the user and is backlit by the sun, the digital eyewear 110 can shade/inverse-shade the still images 215a so as to help the user see the baseball with better visual acuity. In such cases, the digital eyewear 110 can shade the backlighting within the still images 215a so as to reduce its brightness or glare and can decline to shade the baseball so as to allow the user to see it clearly.

[138] In one embodiment, the digital eyewear 110 can alternate presentation of the object to the user’s distinct eyes. For example, the digital eyewear 110 can present every even-numbered still image 215a to the user’s left eye and can present odd-numbered still images 215a to the user’s right eye. For another example, the digital eyewear 110 can be disposed to select each still image 215a for presentation to only one of the user’s eyes, randomly with each eye having a probability of 0.5. This can have the effect that the digital eyewear 110 would present about one-half of all such still images 215a to the user’s left eye and about one-half to the user’s right eye.

[139] In one embodiment, the digital eyewear 110 can determine a direction from which the object is moving and can be disposed to provide a greater fraction of such still images 215a to the user’s eye that is better positioned to see the object. For example, if the object is moving toward the user from the user’s right, the digital eyewear 110 can be disposed to continue to select each still image 215a at random for presentation to only one of the user’s eyes. However, in such cases, the digital eyewear 110 can adjust the probabilities it uses so that the user’s better positioned eye receives much more than half of the still images 215a.
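As a non-limiting sketch of this per-image eye selection, the following Python fragment chooses one eye per still image at random, with the probability skewed toward the better-positioned eye; the bearing convention and the 0.8 bias value are illustrative assumptions.

```python
import random

def choose_eye(object_bearing_deg: float, bias: float = 0.8) -> str:
    """Pick the eye for one still image. Positive bearing means the object
    lies to the user's right; that eye then gets probability `bias`."""
    if object_bearing_deg > 0:
        p_right = bias
    elif object_bearing_deg < 0:
        p_right = 1.0 - bias
    else:
        p_right = 0.5  # object dead ahead: unbiased coin flip
    return "right" if random.random() < p_right else "left"

counts = {"left": 0, "right": 0}
for _ in range(10_000):
    counts[choose_eye(+30.0)] += 1
print(counts)  # roughly 8000 'right' and 2000 'left' for an object to the right
```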

Multiple moving images

[140] In one embodiment, the digital eyewear 110 can provide a sequence of short real time moving images 215c of the object 211 in lieu of a continuous moving image 215b of the object 211. For example, the digital eyewear 110 can provide a sequence of short real time moving images 215c in each one of which the object 211 has about a foot of movement. This method can also provide advantages with respect to backlighting, distractions, and blurring of the continuous moving image 215b, thus improving the user’s visual acuity with respect to the moving image of the object.

[141] For example, the digital eyewear 110 can present the same baseball to the user 101 in short real time moving images that are about 1 millisecond long and 10 milliseconds apart. Thus, the user 101 would see only about 10% of the actual motion of the baseball. In such cases, the user 101 can relatively easily detect the baseball in each such short real time moving image 215c, and can relatively easily determine a speed and direction of the baseball from each such short real time moving image 215c. Alternatively, a different selection of timing for each such short real time moving image 215c can be used so as to improve the user’s visual acuity with respect to the moving object 211. The fraction of the complete moving image can be larger or smaller, and the duration of each such short real time moving image 215c can be longer or shorter, such as in response to ambient conditions of lighting or other factors.

[142] Moreover, the selection of timing for each such short real time moving image 215c, including their duration and fraction of the continuous moving image 215b, can allow the user 101 to more easily detect the speed, direction, rotation, and other movement effects of the object, thus improving the user’s visual acuity with respect to the moving object 211. For example, when the object is a baseball, the user 101 can observe a short linear movement, as opposed to a longer and possibly curved movement. This can have the effect that the user 101 can more easily detect the speed and direction of the baseball at each moment of its path, with the effect that the user can more easily position themselves to catch the baseball (if the user is a fielder) or hit the baseball (if the user is a batter).

[143] Similar to the description with respect to still images 215a, the digital eyewear 110 can be disposed to present the moving images 215c at a selected frequency and with a selected contrast. For example, the digital eyewear 110 can be disposed to present the moving images 215c at a relatively high frequency with respect to motion of a selected object, such as about 80-150 Hz, or another frequency described herein. For example, as described herein, when presenting a baseball, the digital eyewear 110 can provide moving images 215c that are each about 1 millisecond long and about 10 milliseconds apart, thus providing a sequence of such moving images at 100 Hz.

[144] Similar to the description with respect to still images 215a, the digital eyewear 110 can be disposed to present the moving images 215c at a frequency that is selected in response to velocity of a moving object. For example, whether a baseball is moving toward the user at high speed, or whether the baseball is moving across the user’s field of view at a different speed, the digital eyewear 110 can be disposed to select a frequency that optimizes the user’s visual acuity for that object.

[145] Similar to the description with respect to still images 215a, the digital eyewear 110 can be disposed to use shading/inverse-shading to provide the moving images 215c showing the baseball in relatively high contrast with its background. Similar to the description with respect to still images 215a, when the baseball is travelling toward the user and is backlit by the sun, the digital eyewear 110 can shade/inverse-shade the moving images 215c so as to help the user see the baseball with better visual acuity. In such cases, the digital eyewear 110 can shade the backlighting within the moving images 215c so as to reduce its brightness or glare and can decline to shade the baseball so as to allow the user to see it clearly.

[146] Similar to the description with respect to still images 215a, the digital eyewear 110 can be disposed to alternate presentation of the moving images 215c to the user’s right and left eyes. For example, the digital eyewear 110 can present every even-numbered moving image 215c to the user’s left eye and can present odd-numbered moving images 215c to the user’s right eye. Alternatively, the digital eyewear 110 can select each moving image 215c for presentation to only one of the user’s eyes, randomly with each eye having a probability of 0.5. Alternatively, the digital eyewear 110 can select each moving image 215c for presentation randomly with a different probability as adjusted for the direction the user is looking.

Alternating lens shading

[147] In one embodiment, the digital eyewear 110 can provide alternating shading/inverse-shading between two (or more) lenses for the user’s eyes. For example, the digital eyewear 110 can completely blank out the user’s right lens 111 while leaving the user’s left lens 111 clear, alternating with completely blanking out the user’s left lens 111 while leaving the user’s right lens 111 clear. This can have the effect that the user 101 alternately sees out of each one eye but not the other eye. For real-time motion use, the digital eyewear 110 can operate at a speed which equates to the user’s cognitive threshold, the speed of alternating the shading/inverse-shading of the left and right lenses at which the user does not discern any loss of visual information and can see objects with high relative motion with improved visual acuity. This speed for most humans is above 90 Hz, or about 5 milliseconds or less per lens, when two lenses are operating in tandem. A preferred shading speed, shading waveform, shading amount, and other parameters can be determined subjectively in response to the user’s comprehension of the motion to be perceived, such as in response to a user input. A preferred shading speed, shading waveform, shading amount, and other parameters can also be determined objectively, such as using motion or dynamic visual acuity devices; such devices present motion sequences to the user and require the user to respond to show that the user comprehends the motion sequence with adequate visual acuity.
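By way of a non-limiting sketch, the following Python fragment illustrates the timing arithmetic of this alternation; set_lens_opacity() is a hypothetical driver call standing in for real shading hardware, and the 100 Hz alternation rate is an assumption consistent with the threshold described above.

```python
import itertools
import time

ALTERNATION_HZ = 100.0                # above the ~90 Hz threshold noted above
DWELL_S = 1.0 / ALTERNATION_HZ / 2.0  # ~5 ms per lens, two lenses in tandem

def set_lens_opacity(lens: str, opaque: bool) -> None:
    """Hypothetical driver call; a real system would address the
    shading/inverse-shading element of the named lens here."""
    pass

def run_alternation(cycles: int) -> None:
    for lens in itertools.islice(itertools.cycle(["right", "left"]), cycles * 2):
        other = "left" if lens == "right" else "right"
        set_lens_opacity(lens, opaque=True)    # blank one eye...
        set_lens_opacity(other, opaque=False)  # ...leave the other clear
        time.sleep(DWELL_S)                    # hold for one dwell period

run_alternation(cycles=5)
```

Alternative types of shading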

[148] Alternatively, the shading/inverse-shading can be provided with a different amount of shading/inverse-shading other than 100%/0%, and alternatively, the shading/inverse-shading can be provided with a different amount of emphasis on the user’s right eye or left eye. For example, when the object is moving in a peripheral portion of the background (or a peripheral portion of the user’s field of view), the shading/inverse-shading can be provided with respect to that portion of the background so as to allow the user 101 to more easily see the object there. Similarly, when the object is moving in a peripheral portion of the background, the shading/inverse-shading can be provided to prompt the user 101 to look in that direction.

[149] Alternatively, the shading/inverse-shading can be provided so as to emphasize shading/inverse-shading of particular colors. For example, the shading/inverse-shading can be emphasized to filter out blue/violet frequencies while allowing red/yellow frequencies. This can have the effect that the user 101 can have the background and object presented in a less harsh or less bright light, and can have the effect that the user is more able to see in a mixed rods and cones format, or in a rods-only format, for greater precision of viewing the object with adequate visual acuity.
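For illustration only, such a color emphasis could be approximated in the RGB domain by attenuating the blue channel while passing red and green (whose mixture reads as yellow); the following Python sketch, with an assumed attenuation factor, shows the idea.

```python
import numpy as np

def filter_blue(frame_rgb: np.ndarray, blue_pass: float = 0.2) -> np.ndarray:
    """Scale down the blue channel; red and green (whose mixture reads as
    yellow) pass through unchanged."""
    out = frame_rgb.astype(np.float32)
    out[..., 2] *= blue_pass  # channel order assumed R, G, B
    return out.clip(0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, size=(240, 320, 3), dtype=np.uint8)
softened = filter_blue(frame)
print(frame[..., 2].mean(), softened[..., 2].mean())  # blue mean drops ~5x
```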

Viewing objects with acoustic recognition

[150] Fig. 2B shows a conceptual drawing of an example user viewing an object in response to acoustic recognition of the object.

[151] In one embodiment, the digital eyewear 110 can assist the user 101 in viewing, or listening to, an object 211, such as a person asking a question, such as in response to acoustic or visual recognition of that object (or person). For example, the user 101 can be making a presentation to an audience 221. In response to a question from the audience 221, the digital eyewear 110 can perform acoustic recognition of the individual person 222 asking the question, can determine a location of the individual person, and can perform audio/video shading/inverse-shading of the person asking the question, such as to assist the user 101 in viewing, or listening to, that individual person. This can have the effect of providing the presenter with more audio/video acuity with respect to the person asking the question.

[152] In one embodiment, the digital eyewear 110 can be coupled to one or more acoustic receivers 223. When the individual person 222 asks a question, such as when the user 101 takes questions from the audience 221, the acoustic receivers 223 can determine a location of the individual person. The digital eyewear 110 can be disposed to receive that location from the acoustic receivers 223, or can be disposed to determine that location in response to data from the acoustic receivers themselves.
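As a non-limiting sketch of the localization step, the following Python fragment estimates a speaker’s bearing from the time difference of arrival at two microphones a known distance apart (a standard far-field approximation); the microphone spacing and timing values are illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def bearing_from_tdoa(delta_t_s: float, mic_spacing_m: float) -> float:
    """Bearing in degrees off the array's broadside, far-field approximation.
    Positive delta_t means the sound arrived at the right microphone first."""
    s = SPEED_OF_SOUND * delta_t_s / mic_spacing_m
    s = max(-1.0, min(1.0, s))  # guard against measurement noise
    return math.degrees(math.asin(s))

# A questioner 30 degrees to the right of a 0.5 m microphone pair:
# expected delta_t = 0.5 * sin(30 deg) / 343 = about 0.729 ms.
print(f"{bearing_from_tdoa(0.000729, 0.5):.1f} deg")  # -> 30.0 deg
```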

[153] Alternatively, the acoustic receivers 223, in combination with video receivers (not shown) can determine a location and identification of the individual person. The identification can assist with determining the location of the individual person, or the identification can assist with determining an audio manipulation of the individual person’s voice so as to improve the user’s audio acuity with respect to that individual person.

[154] When the digital eyewear 110 determines the location of the individual person 222, the digital eyewear can identify the individual person 222 to the user 101. For example, the digital eyewear 110 can perform one or more of:

— inverse-shading (or otherwise highlighting) the individual person 222 in the user’s field of view 103;

— triggering an AR (augmented reality) view, in which the individual person 222 is highlighted or otherwise identified;

— triggering a light source 224 coupled thereto, and focusing that light source on the individual person 222; or otherwise as described herein.

[155] In one embodiment, the acoustic receivers 223 can include one or more of:

— microphones or directional microphones disposed near the user 101, such as on stage when making a presentation;

— microphones or directional microphones dispersed within the audience 221, so as to provide one or more acoustic receivers 223 near a location from which the individual person 222 speaks;

— a mobile device, such as a microphone, disposed to be lent to the individual person 222, and including a GPS or other location device, so as to identify from where the individual person 222 speaks; or otherwise as described herein.

Example adjustment to sensory inputs

[156] Fig. 3 (collectively including fig. 3A-3B) shows a conceptual drawing of example adjustments to sensory inputs. Fig. 3A shows a conceptual drawing of an example signal coupled to a shading/inverse-shading control with respect to luminance or loudness. Fig. 3B shows a conceptual drawing of an example signal coupled to a control with respect to differing frequencies.

Example shading/inverse-shading signal

[157] Fig. 3A shows a conceptual drawing of an example signal disposed to be coupled to a shading/inverse-shading control with respect to luminance or loudness.

[158] A graph 310 shows a representation of an example control signal that digital eyewear 110 can couple to a shading/inverse-shading control, such as disposed to determine an amount of shading/inverse-shading to be performed by the digital eyewear. The graph 310 includes an X-axis 311, representing time, a Y-axis 312, representing an amount of shading/inverse-shading, and a plot 313 representing the example signal.

[159] In one embodiment, the example control signal, as represented by the plot 313, can control the digital eyewear 110 to provide a time-varying amount of shading/inverse-shading. The time-varying signal can be substantially periodic, and can include a sequence of first time durations during which the digital eyewear 110 substantially refrains from shading/inverse-shading, and a sequence of second time durations during which the digital eyewear 110 substantially performs shading/inverse-shading.

[160] During the first time durations, the control signal can direct the digital eyewear 110 to allow external sensory inputs to reach the user’s eyes, thus allowing the user to see external objects. In contrast, during the second time durations, the signal can direct the digital eyewear 110 to shade/inverse-shade external sensory inputs, thus preventing the user from seeing background glare, audio/video noise, or other sensory or cognitive overload, so as to improve the user’s visual acuity with respect to the object.
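The first and second time durations described above can be read as a square-wave control signal. The following is a minimal sketch of such a signal, assuming the 1 millisecond open time and 10 millisecond period mentioned later in this Application as examples; the function name and sampling are illustrative, not part of the application.

```python
import numpy as np

def shading_control_signal(t_ms, period_ms=10.0, open_ms=1.0, depth=1.0):
    """Square-wave shading control, as one plausible reading of plot 313.

    Returns the shading amount at each time in `t_ms`: 0.0 during the
    "first durations" (view open, object visible) and `depth` during
    the "second durations" (view shaded against glare or noise).
    """
    phase = np.mod(t_ms, period_ms)
    return np.where(phase < open_ms, 0.0, depth)

# Example: over each 10 ms period the view is open for the first 1 ms
# and fully shaded for the remaining 9 ms.
t = np.linspace(0.0, 30.0, 301)  # 30 ms sampled every 0.1 ms
signal = shading_control_signal(t)
print(f"fraction of time open: {np.mean(signal == 0.0):.2f}")
```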

[161] When the user 101 is viewing a moving object 211 against a bright or visually noisy background, the digital eyewear 110 can shade/inverse-shade external sensory inputs. This can have the effect that the user 101 can see the moving object with substantially lesser visual glare or noise, and allow the user to view the moving object without sensory or cognitive overload, so as to improve the user’s visual acuity with respect to the object.

[162] In one embodiment, the control signal can direct the digital eyewear 110 to show the moving object 211 for relatively short times during the sequence of first durations, and to shade/inverse-shade external sensory inputs during the sequence of second durations. This can have the effect that the moving object 211 appears to the user 101 in a view having a strobe-like effect, thus, a sequence of still images (or a sequence of short real-time moving images) rather than an uninterrupted image of continuous motion. This can allow the digital eyewear 110 to reduce the amount of background luminance or visual noise, such as by not presenting that background to the user 101 during the sequence of second durations.

[163] This can have the effect that the user 101 can see the moving object 211 proceeding in its path with a strobe-like effect, while decreasing the possible effect of background luminance or visual noise. This allows the user 101 to follow the progress of the moving object 211 without the user’s view being debilitated by sensory or cognitive overload, so as to improve the user’s visual acuity with respect to the object.

[164] For example, when the moving object 211 is a ball (such as a baseball, basketball, football, golf ball, or soccer ball), a hockey puck, or another such object, the possibility of sensory or cognitive overload from background luminance or visual noise can be substantially ameliorated. This can have the effect that the user 101 is afforded the ability to see the moving object 211 even when substantial background luminance or visual noise is present. In such cases, the background luminance or visual noise can be removed from the user’s view while still allowing the user 101 to follow the progress of the moving object 211.

[165] While this Application describes one possible control signal with respect to the digital eyewear 110 shading/ inverse-shading external sensory inputs, in the context of the invention, there is no particular requirement for any such limitation. There are many possible alternatives that are within the scope and spirit of the invention. The control signal can vary substantially, in response to changes in external sensory inputs, in response to ambient lighting conditions, in response to user inputs, in response to object recognition, in response to an accelerometer or other information with respect to a condition of the digital eyewear 110 itself, in response to user parameters (such as whether the user 101 is tired or ill), or otherwise.

[166] The control signal can have a different amount of shading/inverse-shading, a different period, or a different fraction of time the image is shown, than the examples directly described herein. For example, the control signal can present the same baseball to the user 101 in short real-time moving images that are longer or shorter than the example given (1 millisecond) and have a longer or shorter period than the example (10 milliseconds apart). For another example, the control signal can present the same baseball with less than 100% amount of shading/inverse-shading, or can present the same baseball with more shading/inverse-shading at some times and less shading/inverse-shading at other times.

[167] The control signal can have a different shape than the examples directly described herein. For example, instead of a sharp rise time or fall time, shown in the plot 313 as substantially instantaneous, the control signal can take more time to “fade in” or “fade out” the shading/inverse-shading. The control signal can fade in/out the sequence of still images (or sequence of short real-time moving images). Thus, the shape of the control signal can have a triangular shape or a trapezoidal shape as viewed as a plot 313 of shading/inverse-shading versus time. For another example, the control signal can fade in/out continuously; thus, the control signal can take the shape of a sine wave or another selected shape. The control signal need not even be periodic; it can have a random component with respect to its duration, fraction of shading/inverse-shading time, fade in/out time, or otherwise.

[168] The control signal can have its period, its fraction of shading/inverse-shading time, or its shape, altered in response to changes in external sensory inputs. For example, the digital eyewear 110 can change the period of the control signal to show still images (or short real-time moving images) at a different rate when an object of interest, such as a baseball, moves more quickly, approaches the user 101, or changes its relationship to the lighting source. In such cases, the digital eyewear 110 can (A) show the baseball more frequently when it is closer to the user 101 or when it is moving more quickly, (B) show the baseball more frequently but for shorter times when it is subject to glare or excessive backlighting, (C) show the baseball less frequently but for longer times when it is subject to visual background noise, or (D) make other changes to the presentation of objects in response to changes in external sensory inputs, in each case so as to improve the user’s visual acuity with respect to the baseball.
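By way of illustration, the case analysis in the preceding paragraph could be expressed as a small adaptation rule. The sketch below is one plausible reading, not the application’s method; the speed, distance, and glare thresholds, the units, and the scale factors are invented for the example.

```python
def adapt_strobe(base_period_ms, base_open_ms, speed, distance, glare):
    """Adjust strobe period and open time to the viewing conditions.

    Heuristics follow the cases in the text: show the object more often
    when it is fast (speed in m/s) or close (distance in m); more often
    but more briefly under glare (0..1). All factors are illustrative.
    """
    period = base_period_ms
    open_time = base_open_ms

    if speed > 20.0 or distance < 10.0:  # fast-moving or nearby object
        period *= 0.5                    # show it twice as often
    if glare > 0.8:                      # strong glare or backlighting
        period *= 0.75                   # more frequent presentations...
        open_time *= 0.5                 # ...but shorter exposures
    return period, open_time

# Example: a fast incoming ball under heavy glare.
print(adapt_strobe(10.0, 1.0, speed=35.0, distance=8.0, glare=0.9))
```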

Example multiple shading/inverse-shading signals

[169] Fig. 3B shows a conceptual drawing of an example set of multiple signals disposed to be coupled to a control with respect to differing frequencies.

[170] A graph 320 shows a representation of an example signal that digital eyewear 110 can couple to a shading/inverse-shading control, such as disposed to determine an amount of shading/inverse-shading to be performed by the digital eyewear. The graph 320 includes an X-axis 321, representing time, a set of Y-axes 322, each representing an amount of shading/inverse-shading, and a set of plots 323a, 323b, and 323c, each representing one such example signal.

[171] Similar to fig. 3A, each such example signal, as represented by its associated plot 323a, 323b, or 323c, can control the digital eyewear 110 to provide a time-varying amount of shading/inverse-shading. Each such example signal can represent a signal for a portion of the external sensory input received by the digital eyewear 110 and possibly provided to the user 101.

[172] For example, each such example signal can represent a selected set of frequencies, such as red, green, and blue colors. Although the figure shows a selected set of plots 323a, 323b, and 323c, that do not overlap in time, the user’s eye and brain can integrate the selected frequencies. This can have the effect that the user 101 can view the moving object 211 in full color despite only one or two colors being presented at any selected time.

[173] Although the figure shows a selected set of plots 323a, 323b, and 323c, that represent control signals that do not overlap in time, in the context of the invention, there is no particular requirement for any such limitation. When each such example signal can represent a selected set of frequencies, it is possible that colors can overlap at selected times. Each color can be presented to the user 101 individually, pairs of colors can overlap, or all three colors can overlap, each at selected times.

[174] Although the figure shows a selected set of plots 323a, 323b, and 323c, that are described as representing control signals for distinct sets of frequencies, in the context of the invention, there is no particular requirement for any such limitation. For example, the selected sets of frequencies can overlap substantially. In such cases, one selected set of frequencies can represent a black/white signal, and additional selected sets of frequencies can represent red, green, and blue color signals. The red, green, and blue color signals can overlap; thus, the frequencies shaded/inverse-shaded with respect to the red and green, or green and blue, can include selected frequencies that are common to both.

[175] In one embodiment, the digital eyewear 110 can provide sets of frequencies to the user 101 using one or more filters to select only those frequencies. For example, the digital eyewear 110 can select only green frequencies to present with one or more electrochromatic filters tuned to those particular frequencies. Alternatively, the digital eyewear 110 can present only selected sets of frequencies using polarizing filters tuned to those particular frequencies.

[176] Although the figure shows a selected set of plots 323a, 323b, and 323c, representing control signals for distinct colors that are substantially identical except for phase differences, in the context of the invention, there is no particular requirement for any such limitation. For example, the control signals can treat different colors differently so as to increase/decrease the amount of one or more selected colors after processing the external sensory inputs. For example, the control signals can present 10% of the red, 10% of the blue, and 20% of the green, from an image, with the possible effect that the user’s view of green in the image is more detailed, or otherwise to provide the maximum information to the user 101 that they can cognitively process.
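By way of illustration, the per-color duty cycles just described (10% red, 20% green, 10% blue, presented in non-overlapping, phase-offset windows as in fig. 3B) could be generated as follows. This is a minimal sketch; the window ordering and the helper name color_pass_windows are assumptions for the example.

```python
import numpy as np

# Duty fraction per color within one strobe period; the 10%/20%/10%
# split follows the example in the text, the ordering is assumed.
DUTY = (("red", 0.10), ("green", 0.20), ("blue", 0.10))

def color_pass_windows(t_ms, period_ms=10.0):
    """Non-overlapping, phase-offset pass windows for each color.

    Each color is passed (inverse-shaded) during its own slice of the
    period and shaded for the rest; the user's eye integrates the
    interleaved slices into a full-color view.
    """
    phase = np.mod(t_ms, period_ms) / period_ms  # position in period, 0..1
    windows, start = {}, 0.0
    for color, frac in DUTY:
        windows[color] = (phase >= start) & (phase < start + frac)
        start += frac
    return windows

t = np.linspace(0.0, 10.0, 1000, endpoint=False)
for color, mask in color_pass_windows(t).items():
    print(f"{color:5s} passed for {mask.mean():.2f} of each period")
```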

[177] Although the figure shows a selected set of plots 323a, 323b, and 323c, that are described as representing control signals for distinct colors, in the context of the invention, there is no particular requirement for any such limitation. For example, the selected plots 323a, 323b, and 323c, can represent control signals for other audio/video components to be presented to the user 101. In such cases, other audio/video components can include one or more of:

— video components of the user’s field of view 103 other than color, such as (A) individual pixels, (B) particular objects, (C) broad light/dark regions, (D) relatively brighter/less-bright video components;

— video components of the user’s field of view when that field of view is altered by other equipment, such as when the user is viewing external sensory inputs using (A) binoculars; (B) camera lenses; (C) an infrared sight/scope; (D) a microscope or telescope; (E) a rifle scope; (F) medical equipment associated with optometrists, ophthalmologists, or other medical personnel; (G) contact lenses including color, stippling, stripes, or other ocular effects;

— video components of the user’s field of view 103 having different sensory load on the user’s ability to receive that information, such as (A) a difference between a near field of view and a far field of view, particularly when the user is nearsighted or farsighted; (B) a difference between an object pertinent to the user’s activity and an object not pertinent, such as road signs when the user is driving, terrain when the user is piloting an aircraft, sports equipment when the user is participating in a sport; (C) objects likely to blur when the user is moving, such as objects not in line of sight with the user’s motion;

— video components of the user’s field of view 103 having different cognitive load on the user’s ability to process that information, such as (A) a difference between frontal vision and side vision; (B) a difference between attentive vision and peripheral vision; (C) a difference between viewing moving objects and stationary objects; (D) a difference between viewing symmetrical and asymmetrical objects;

— audio components available to the user’s hearing, such as (A) relatively higher/lower audio frequencies, (B) relatively louder/softer audio components;

— audio components related to the user’s activity, such as (A) singing or speaking voices or musical instruments when the user is attending an opera or play, (B) special effects or vehicle noises when the user is watching a movie or television; (C) traffic signals, engine noises, brakes or horns, when the user is standing or walking in traffic, such as when the user is a traffic officer;

— audio/video components related to medical conditions impacting the user, such as (A) when the user is under stress or tension; (B) when the user is under the influence of recreational medicine, such as alcohol or cannabis; (C) when the user is subject to a brain trauma, a cardiac event, a concussion, exhaustion, or a stroke; (D) when the user is subject to strong emotion, such as depression or mania; or otherwise.

Example inputs including cognitive overload

[178] Fig. 4 (collectively including fig. 4A-4C) shows a conceptual drawing of example sensory inputs including possible cognitive overload. Fig. 4A shows a conceptual drawing of an example system involving sudden excessive luminance or loudness. Fig. 4B shows a conceptual drawing of an example system involving a side-channel warning of surprising sensory inputs. Fig. 4C shows a conceptual drawing of an example representation of relatively rapid response to sudden excessive luminance or loudness.

Sudden excessive luminance or loudness

[179] Fig. 4A shows a conceptual drawing of an example system involving sudden excessive luminance or loudness.

[180] In one embodiment, a user 101 can be driving a vehicle 411 (such as an automobile) or otherwise moving with respect to a light source 412. For example, the user 101 can enter or exit a relatively dark tunnel 413, such as at an entrance 413a or an exit 413b thereof.

[181] In such cases, when the vehicle 411 passes through the entrance 413a and enters the tunnel 413, the user’s vision can undergo a substantial sensory underload due to the light source 412 being blocked. The user 101 can experience a possibly brief, but nonetheless substantial, time during which the user’s vision will be substantially impaired. This can have the effect that the user’s control of the vehicle 411 can be hindered, for at least some time after the user 101 enters the tunnel 413. This can be dangerous, particularly when the user 101 is driving at a rapid pace, such as when racing.

[182] Similarly, when the vehicle 411 passes through the exit 413b and exits the tunnel 413, the user’s vision can undergo a substantial sensory overload due to the light source 412 becoming unblocked. The user 101 can experience a time during which excessive brightness or glare will cause the user’s vision to be substantially impaired. This, too, can have the effect that the user’s control of the vehicle 411 can be dangerously hindered, for at least some time after the user 101 exits the tunnel 413. Similar to entering the tunnel 413, this can be dangerous, particularly when the user 101 is driving at a rapid pace, such as when racing.

[183] Other cases of sensory or cognitive overload/underload can occur in response to

— sudden exposure/de-exposure to the sun, such as in response to movement of clouds in front of or away from a line of sight between the user 101 and the sun;

— sudden exposure/de-exposure to glare or other reflective brightness, such as in response to movement of a reflective surface (such as water, glass, or metal) into or out of a line allowing reflection of the light source 412 into the user’s eyes;

— sudden loud sounds, such as in response to a collision or explosion, an automobile engine backfire, a gunshot, other loud sounds, or otherwise;

— sudden background noise reducing the clarity of softer sounds, such as when listening to another person talking in a noisy environment subject to ambient noise; or other rapid onset of excessive brightness, glare, or loudness, or otherwise.

Side-channel warning of surprising sensory inputs

[184] Fig. 4B shows a conceptual drawing of an example system involving a side-channel warning of surprising sensory inputs.

[185] In one embodiment, sensory or cognitive overload can be deliberately induced, such as when used by a protected person 423, such as law enforcement or military personnel, to degrade the ability of an unprotected person 425. One such device sometimes used for such purposes includes “flashbang grenades”, which generate excessive light and sound without explosive damage, so as to temporarily blind or deafen the unprotected person 425. In such cases, the protected person 423 typically desires to use a flashbang grenade against an unprotected person 425 without themselves being subject to the effects thereof.

[186] In one embodiment, a device 420, such as a flashbang grenade, can include a transmitter 421 disposed to emit a warning signal 422, such as an RF (radio frequency) or other electromagnetic signal. The protected person 423, such as law enforcement or military personnel, can be disposed with digital eyewear 110 that receives the warning signal 422. When the digital eyewear 110 receives the warning signal 422, the digital eyewear 110 triggers audio/video shading to protect the protected person 423 against sensory or cognitive overload otherwise deliberately induced by the flashbang grenade 420. As further described herein with respect to fig. 4C, the digital eyewear 110 can be disposed to trigger audio/video shading sufficiently rapidly that the warning signal 422 need be emitted only a few milliseconds before the flashbang grenade causes its intended sensory overload due to excessive light and sound.

[187] For example, the digital eyewear 110 can include a receiver 424 disposed to receive the warning signal 422. The receiver 424 can be coupled to the computing device 120, which can be disposed to trigger audio/video shading to protect the protected person 423, while there is no such audio/video shading to protect the unprotected person 425. For example, the audio/video shading can include a polarizing filter (for video shading) and sound-dampening headphones (for audio shading) coupled to the computing device 120, so as to protect the protected person 423. In one embodiment, the digital eyewear 110, computing device 120, and audio/video shading can be disposed within an integrated headset disposed to protect the wearer 423. In contrast, the unprotected person 425 has no such audio/video shading.
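By way of illustration, the receiver-side behavior just described could be organized as an event handler that shades first and restores later. The sketch below assumes a hardware interface with set_shading and set_ear_damping methods (both invented for the example) and stands in a printing stub for the actual lenses and headphones.

```python
import time

class EyewearStub:
    """Stand-in for the shading hardware; prints instead of driving lenses."""
    def set_shading(self, level):
        print(f"[{time.monotonic():.4f}] video shading -> {level:.0%}")
    def set_ear_damping(self, level):
        print(f"[{time.monotonic():.4f}] audio damping -> {level:.0%}")

def on_warning_received(eyewear, hold_s=1.5):
    """Handle a side-channel warning: shade immediately, restore later.

    The warning arrives only a few milliseconds before the device
    triggers, so the handler drives the shading hardware first and
    defers everything else until the lenses are already dark.
    """
    eyewear.set_shading(1.0)      # full video shading
    eyewear.set_ear_damping(1.0)  # full audio damping
    time.sleep(hold_s)            # wait out the flash and bang
    eyewear.set_shading(0.0)
    eyewear.set_ear_damping(0.0)

on_warning_received(EyewearStub(), hold_s=0.01)
```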

[188] In one embodiment, the digital eyewear 110 can include a first lens 111 and a second lens 111 coupled to the computing device 120, so as to prevent sensory overload imposed on the protected person 423. In such cases, when the digital eyewear 110 detects the warning signal 422, the digital eyewear’s computing device 120 can intercept external sensory inputs at the first lens 111, so as to provide monitoring and delay of sensory inputs. The monitoring and delay of sensory inputs can prevent sensory overload imposed on the protected person 423.

[189] The computing device 120 can process the external sensory inputs received at the first lens 111, remove excessive light and sound that can otherwise cause sensory or cognitive overload, and provide processed inputs to the protected person 423 using the second lens 111. This can have the effect that the digital eyewear 110 provides audio/video shading in response to the warning signal 422, while the unprotected person 425 has no such audio/video shading.

[190] In one embodiment, the device 420 can include an explosive, such as a shaped-charge explosive or another explosive disposed to operate with respect to a particular object. For example, the explosive can be disposed to operate with respect to a door or door-frame, so as to remove the door and its door-frame as an obstacle to law enforcement officers attempting to enter. In such cases, the digital eyewear 110 can process the sensory inputs received at the first lens 111 and remove the excessive audio/video caused by the explosive. The digital eyewear 110 can also process the sensory inputs received at the first lens 111 and remove the door or door-frame themselves from the image seen by the law enforcement officers, so as to allow the law enforcement officers to see the gap made by the explosive while results of the explosion are clearing. In such cases, the explosive need not be a flashbang-type explosive, merely a device 420 disposed to remove (or weaken) an obstacle to entry. Similar devices 420 can be used by military personnel or search/rescue personnel.

[191] While this Application shows particular techniques for warning the digital eyewear 110 about use of a flashbang grenade 420 (or other device designed to induce deliberate sensory or cognitive overload), in the context of the invention, there is no particular requirement for any such limitation. For example, other techniques for identifying the effects of a flashbang grenade 420, or similar device, would be workable, and are within the scope and spirit of the invention. Similarly, while this Application shows particular techniques for providing audio/video shading in response to the warning signal 422, in the context of the invention, there is no particular requirement for any such limitation. For example, other techniques for mitigating the effects of a flashbang grenade 420, or similar device, would be workable, and are within the scope and spirit of the invention.

Rapid response to sudden excessive luminance or loudness

[192] Fig. 4C shows a conceptual drawing of an example representation of relatively rapid response to sudden excessive luminance or loudness.

[193] A graph 430 shows a representation of a set of example signals representing onset of excessive luminance or loudness. The graph 430 includes an X-axis 431, representing time. The graph 430 also includes a first Y-axis 432a and a first time-varying plot 433a, representing an amount of luminance or loudness, a second Y-axis 432b and a second time-varying plot 433b, representing an amount of shading/inverse-shading, and a third Y-axis 432c and a third time-varying plot 433c, representing a user’s sensory response to the luminance or loudness.

[194] In one example, an amount of luminance or loudness can exhibit relatively rapid onset, such as when the user 101 is subject to a sudden excessively bright light or loud sound. The first plot 433a shows that luminance or loudness can increase rapidly from a relative minimum to a relative maximum in a fraction of a second. For example, some sudden excessively bright lights or loud sounds can reach a relative maximum in only a few milliseconds.

[195] In such cases, the digital eyewear 110 can detect the onset of excessive luminance or loudness, that is, an onset sufficient to produce sensory or cognitive overload. The digital eyewear 110 can generate a signal, shown by the second plot 433b, representing an amount of shading/inverse-shading provided in response to the excessive luminance or loudness. As shown in the second plot 433b, the control signal for shading/inverse-shading, in response to the excessive luminance or loudness, can be provided in only a few milliseconds, thus, faster than the rise time of the sudden excessively bright light or loud sound.

[196] When the digital eyewear 110 provides shading/ inverse-shading in response to the sudden excessively bright light or loud sound, there is a portion of the bright light or loud sound that is not shaded/inverse-shaded. That portion thus leaks through to the user’s eye despite efforts by the digital eyewear 110. As shown in the third plot 433c, the digital eyewear 110 can respond sufficiently rapidly that the amount of the bright light or loud sound that leaks through to the user’s eye is relatively small. This can have the effect that the user 101 is protected against sensory or cognitive overload, despite the excessively bright light or loud sound being intense, sudden, or both.
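The benefit of responding faster than the rise time can be made concrete with a small calculation. The sketch below models the flash as a linear ramp and the shading as a step function; the 5 millisecond rise and 2 millisecond response are illustrative numbers, not figures from the application.

```python
import numpy as np

def leaked_exposure(rise_ms=5.0, response_ms=2.0, horizon_ms=20.0, step_ms=0.01):
    """Integrate how much light leaks through before shading engages.

    Models the flash as a linear ramp reaching full intensity at
    `rise_ms`, and the shading as fully engaged after `response_ms`.
    Returns the leaked exposure as a fraction of what an unprotected
    eye would receive over the same horizon.
    """
    t = np.arange(0.0, horizon_ms, step_ms)
    intensity = np.clip(t / rise_ms, 0.0, 1.0)     # flash rising to maximum
    shading = np.where(t < response_ms, 0.0, 1.0)  # shade after response time
    leaked = np.trapz(intensity * (1.0 - shading), t)
    unprotected = np.trapz(intensity, t)
    return leaked / unprotected

# With a 2 ms response to a 5 ms rise, only a few percent leaks through.
print(f"leak-through: {leaked_exposure():.1%} of unprotected exposure")
```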

Example adjustment to sensory systems

[197] Fig. 5 shows a conceptual drawing of example adjustment of user sensory systems.

[198] As further described herein, the system 100 can perform adjustment of user sensory systems in addition to, or in lieu of, adjusting incoming external sensory inputs. For example, in addition to, or in lieu of, shading excessive luminance, the system 100 can prompt the user’s pupils to narrow; this can have the effect that the user’s eyes perform the function of reducing sensory or cognitive overload, rather than requiring the digital eyewear 110 to do so.

Inducing pupillary adjustment

[199] As further described herein, apparatus can be disposed to induce adjustment of user sensory systems, such as prompting adjustment of an opening of the user’s pupil, or otherwise to have the effects described herein, including one or more of:

— a first electronic element 511 disposed to be coupled to the user’s iris, pupil, or other portion of the user’s eye, or otherwise to have the effects described herein;

— a first signal 512 disposed to be coupled to that first electronic element, the first signal disposed to have an effect of prompting adjustment of an opening of the user’s pupil, or otherwise to have the effects described herein.

[200] The first electronic element 511 can be coupled to the user’s iris, pupil, or other portion of the user’s eye, or otherwise to have the effects described herein. For example, the first electronic element 511 can include a first conductive circuit element, such as a wire, disposed to be coupled to a portion of the user’s eye. In such cases, the portion of the user’s eye can be selected so as to prompt the user’s iris to widen or narrow in response to the first signal 512. The portion of the user’s eye can include an element of the eye capable of opening the user’s iris; this can have the effect that the user’s pupil can widen or narrow in response to the first signal 512. This can also have the effect that the user’s pupil can widen or narrow substantially faster when triggered by the first signal 512 than when triggered by muscle signals from the brain.

[201] For another example, the first electronic element 511 can include an electromagnetic transmitter, such as a Bluetooth™, RFID, or other RF (radio frequency) transmitter disposed to send the first signal 512, or a variant thereof, to a first electromagnetic receiver. The first electronic element 511 can also include the first electromagnetic receiver 511b, such as an RFID or other RF antenna coupled to a contact lens 111 and disposed to receive the first signal 512, or a variant thereof. In such cases, the first electromagnetic receiver can be coupled to a portion of the user’s eye so as to prompt the user’s iris to widen or narrow in response to the first signal 512; this can have the effect that the user’s pupil can widen or narrow in response to the first signal 512.

[202] As further described herein, the first electromagnetic receiver (or the first conductive circuit element) can be disposed at, on, or within, the contact lens 111, which can be disposed at or on a surface of the user’s eye. In such cases, when the first electromagnetic receiver (or the first conductive circuit element) receives the first signal 512, an electronic current can be coupled to the portion of the user’s eye so as to prompt the user’s iris to widen or narrow in response thereto. In one embodiment, the user’s iris can be prompted to widen or narrow in response to an electromagnetic signal applied to the user’s musculature controlling the iris, in response to an amount of pain applied to the user’s eye and prompting the user’s eye to adjust the iris, or otherwise as consistent with this Application.

[203] As further described herein, the system 100 can induce pupillary adjustment in response to changes, including sudden changes, in luminance directed at the user’s eye. For example, as further described herein with respect to fig. 4A, when the user is driving an automobile and enters or exits a substantially dark tunnel, the luminance directed at the user’s eye might be substantially reduced (upon entry) or increased (upon exit, particularly when exiting into direct sunlight). For example, this can have the effect of improving the user’s visual acuity upon entrance to or exit from the tunnel.

[204] The system 100 can generate and emit the first signal 512 to widen or narrow the user’s pupil, as appropriate, in response to changes, including sudden changes, to luminance directed at the user’s eye. For example, this can have the effect that when the user enters a substantially dark tunnel, the system 100 can prompt the user’s pupil to widen, so as to prompt rapid response to the relatively sudden darkness experienced by the user’s eyes; this can have the effect that the user is able to see clearly without the substantial delay ordinarily associated with relatively sudden darkness. For a similar example, this can have the effect that when the user exits a substantially dark tunnel, the system 100 can prompt the user’s pupil to narrow, so as to prompt rapid response to the relatively sudden brightness experienced by the user’s eyes; this can have the effect that the user is able to see clearly without the substantial delay ordinarily associated with relatively sudden brightness. This can be particularly important when the user is driving at a relatively fast speed (such as in a race) and when sunlight is angled directly at the user’s eyes (such as when the sun is relatively low in the sky and appears at the exit of the tunnel).
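By way of illustration, the decision to emit the first signal 512 in the widening or narrowing sense could reduce to a threshold on the rate of change of scene luminance. The sketch below is one plausible reading; the lux-per-second units and the threshold value are assumptions for the example.

```python
def pupil_prompt(lum_now, lum_prev, dt_s, rate_threshold=500.0):
    """Decide whether to emit the first signal 512, and in which sense.

    Compares the rate of change of scene luminance (assumed here to be
    in lux per second) against an illustrative threshold. Returns
    "narrow", "widen", or None.
    """
    rate = (lum_now - lum_prev) / dt_s
    if rate > rate_threshold:    # sudden brightness, e.g. tunnel exit
        return "narrow"
    if rate < -rate_threshold:   # sudden darkness, e.g. tunnel entrance
        return "widen"
    return None

# Example: exiting a dark tunnel into direct sunlight.
print(pupil_prompt(lum_now=50_000.0, lum_prev=40.0, dt_s=0.1))  # -> narrow
```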

[205] As further described herein, apparatus can be disposed to induce adjustment of user sensory systems, such as prompting adjustment of the user’s gaze direction, or otherwise to have the effects described herein, including one or more of:

— a second electronic element 521 disposed to be coupled to the user’s eye muscles, sclera, other portion of the user’s eye, or otherwise to have the effects described herein;

— a second signal 522 disposed to be coupled to that second electronic element, the second signal disposed to have an effect of prompting adjustment of the user’s gaze direction, or otherwise to have the effects described herein.

[206] The second electronic element 521 can be coupled to the user’s eye muscles, sclera, or other portion of the user’s eye, or otherwise to have the effects described herein. For example, the second electronic element 521 can include a second conductive circuit element, such as a wire, coupleable to a portion of the user’s eye. In such cases, the portion of the user’s eye can be selected so as to prompt the user’s eye gaze to change to a different direction in response to the second signal 522. The portion of the user’s eye can include an element of the eye capable of altering the user’s eye gaze direction; this can have the effect that the user’s eye gaze can change to a different direction in response to the second signal 522. This can also have the effect that the user’s eye gaze can change to a different direction substantially faster when triggered by the second signal 522 than when triggered by muscle signals from the brain.

[207] For another example, the second electronic element 521 can include a second electromagnetic transmitter, such as a Bluetooth™ or other RF (radio frequency) transmitter disposed to send the second signal 522, or a variant thereof, to a second electromagnetic receiver. The second electronic element 521 can also include the second electromagnetic receiver, such as an RF antenna coupled to the contact lens 111 and disposed to receive the second signal 522, or a variant thereof. In such cases, the second electromagnetic receiver can be coupled to a portion of the user’s eye so as to prompt the user’s eye gaze to change to a different direction in response to the second signal 522; this can have the effect that the user’s eye gaze can change to a different direction in response to the second signal 522.

[208] As further described herein, the second electromagnetic receiver (or the second conductive circuit element) can be disposed at, on, or within, a contact lens 111, which can be disposed at or on a surface of the user’s eye. In such cases, when the second electromagnetic receiver (or the second conductive circuit element) receives the second signal 522, an electronic current can be coupled to the portion of the user’s eye so as to prompt the user’s eye gaze to change direction in response thereto.

[209] As further described herein, the system 100 can induce gaze adjustment in response to changes, including sudden changes, in luminance directed at the user’s eye. For example, as further described herein with respect to fig. 2A, when the user is watching a moving target and encounters substantially greater backlighting or glare, the luminance directed at the user’s eye might be substantially increased (such as upon backlighting from the sun, or such as upon encountering reflective glare).

[210] The system 100 can generate and emit the second signal 522 to adjust the user’s gaze direction, as appropriate, in response to changes, including sudden changes, in luminance directed at the user’s eye. For example, the user can be subjected to sudden glare, in response to which the system 100 can prompt the user’s eye to look away from the location from which the glare is directed. For another example, the user can be looking at an object and have a sudden amount of backlighting or visual noise appear behind the object (or have the object move in front of a sudden amount of backlighting or visual noise), in response to which the system 100 can prompt the user’s eye to look away from the location from which the backlighting or visual noise is directed, such as toward a direction toward which the object is moving.

Example methods of use

[211] Fig. 6 (collectively including fig. 6A-6E) shows a conceptual drawing of an example method of using a digital eyewear system. Fig. 6A shows a conceptual drawing of an example method of using a digital eyewear system. Fig. 6B shows a conceptual drawing of an example method of using a digital eyewear system to adjust sensory inputs with respect to sensory or cognitive overload. Fig. 6C shows a conceptual drawing of an example method of using a digital eyewear system to adjust sensory inputs with respect to side-channel warning of surprises. Fig. 6D shows a conceptual drawing of an example method of using a digital eyewear system to adjust sensory inputs involving monitoring and delay of sensory inputs. Fig. 6E shows a conceptual drawing of an example method of using a digital eyewear system with respect to induced adjustment of user sensory systems.

[212] Fig. 6A shows a conceptual drawing of an example method of using a digital eyewear system.

[213] A method 600 includes flow points and method steps as shown in the figure, and as otherwise described herein, such as:

— a flow point 600A, in which the method 600 is ready to begin;

— a flow point 610, in which the method 600 is ready to adjust sensory inputs with respect to sensory or cognitive overload;

— a flow point 620, in which the method 600 is ready to adjust sensory inputs with respect to side-channel warning of surprises;

— a flow point 630, in which the method 600 is ready to adjust sensory inputs involving monitoring and delay of sensory inputs;

— a flow point 640, in which the method 600 is ready to induce adjustment of user sensory systems;

— a flow point 600B, in which the method 600 is ready to finish.

Beginning of method

[214] A flow point 600A indicates that the method 600 is ready to begin.

[215] The method 600 can be triggered by one or more of the following:

— detecting circumstances in which sensory inputs are likely to lead to sensory or cognitive overload;

— detecting side-channel warning of surprises with respect to sensory inputs;

— detecting other factors with respect to sensory inputs, such as described herein; or otherwise.

[216] The method 600 can determine whether to adjust those incoming sensory inputs (A) using intermittent shading/inverse-shading, (B) in response to side-channel warning of surprises, (C) using monitoring and delay, or (D) to instead induce adjustment of user sensory systems.

[217] When the method 600 determines that it should adjust incoming sensory inputs using intermittent shading/inverse-shading, the method 600 can proceed with the flow point 610A. When the method 600 returns from the corresponding flow point 610B, it can proceed with the flow point 600B.

[218] When the method 600 determines that it should adjust incoming sensory inputs in response to side-channel warning of surprises, the method 600 can proceed with the flow point 620A. When the method 600 returns from the corresponding flow point 620B, it can proceed with the flow point 600B.

[219] When the method 600 determines that it should adjust incoming sensory inputs using monitoring and delay, the method 600 can proceed with the flow point 630A. When the method 600 returns from the corresponding flow point 630B, it can proceed with the flow point 600B.

[220] When the method 600 determines that it should induce adjustment of user sensory systems, the method 600 can proceed with the flow point 640A. When the method 600 returns from the corresponding flow point 640B, it can proceed with the flow point 600B.

End of method

[221] A flow point 600B indicates that the method 600 is ready to finish. The method 600 can finish operations and clean up after any ongoing operations.

[222] In one embodiment, the method 600 can be restarted as triggered by any technique described with respect to the flow point 600A.

Intermittent shading/ inverse-shading

[223] Fig. 6B shows a conceptual drawing of an example method of using a digital eyewear system to adjust sensory inputs with respect to sensory or cognitive overload.

[224] A flow point 610A indicates that the method 600 is ready to adjust sensory inputs with respect to sensory or cognitive overload.

[225] At a step 611, the method 600 can determine that a sensory or cognitive overload or underload, such as an excessive luminance or loudness (overload), or such as an inadequate luminance or loudness (underload), is occurring or about to occur. For example, a sudden increase in luminance or loudness can be identified by the method 600 as a likely source of sensory or cognitive overload; similarly, a sudden decrease in luminance or loudness can be identified by the method 600 as a likely source of sensory or cognitive underload.

[226] For example, a sensory or cognitive overload can occur when a floodlamp or other bright light is directed at the user’s eyes, when a flashbang grenade is triggered near the user, when a vehicle exits a dark tunnel into bright sunlight, when other sudden changes occur that increase luminance or loudness, or otherwise. Similarly, a sensory or cognitive underload can occur when a bright light is no longer directed at the user’s eyes, when a bright light or loud noise is no longer operating near the user, when a vehicle enters a dark tunnel from bright sunlight, when other sudden changes occur that decrease luminance or loudness, or otherwise.

[227] When the method 600 determines that a sensory or cognitive overload/underload is occurring or is about to occur, the method can proceed with the next step. Otherwise, the method 600 can proceed with the flow point 610B.
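By way of illustration, the determination in step 611 could be implemented as a comparison of the current luminance or loudness level against a short rolling baseline. The sketch below is one such detector; the window length and the 4x/0.25x trigger ratios are invented for the example.

```python
from collections import deque

class OverUnderloadDetector:
    """Flag likely sensory overload/underload from sudden level changes.

    Keeps a short rolling baseline of the input level (luminance or
    loudness, in arbitrary linear units) and flags a jump well above or
    below it. Window length and trigger ratios are illustrative.
    """
    def __init__(self, window=30):
        self.history = deque(maxlen=window)

    def update(self, level):
        baseline = sum(self.history) / len(self.history) if self.history else level
        self.history.append(level)
        if level > 4.0 * baseline:
            return "overload"    # e.g. exiting a tunnel into sunlight
        if level < 0.25 * baseline:
            return "underload"   # e.g. entering a dark tunnel
        return None

detector = OverUnderloadDetector()
for level in [100, 102, 98, 101, 950]:  # steady levels, then a sudden flash
    state = detector.update(level)
print(state)  # -> overload
```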

[228] At a step 612, the method 600 can trigger the digital eyewear 110 to shade/inverse-shade the lenses 111, or a portion thereof, or a selected group of pixels 114 thereof.

[229] At a step 613, the method 600 can determine that the sensory or cognitive overload or underload is substantially finished. For example, the sudden increase/decrease in luminance or loudness can have abated. In such cases, the method 600 can determine whether the level of sensory or cognitive input has returned to a normal level, so as to not provide sensory or cognitive overload or underload, with the effect of improving the user’s visual acuity.

[230] When the method 600 determines that the sensory or cognitive overload/underload is substantially finished, the method can proceed with the next step. Otherwise, the method 600 can proceed with the flow point 610B.

[231] At a step 614, the method 600 can trigger the digital eyewear 110 to no longer shade/inverse-shade the lenses 111, or the portion thereof it had earlier selected.

[232] The method can proceed with the flow point 610B.

[233] A flow point 610B indicates that the method 600 is ready to return to the end of the main method.

Side-channel warning of surprises

[234] Fig. 6C shows a conceptual drawing of an example method of using a digital eyewear system to adjust sensory inputs with respect to side-channel warning of surprises.

[235] A flow point 620A indicates that the method 600 is ready to adjust sensory inputs with respect to side-channel warning of surprises.

[236] At a step 621, a device 420 (such as a flashbang grenade described with respect to fig. 4B, an artillery piece, other excessively bright or loud equipment, or otherwise) that is likely to provide excessive luminance/loudness as an external sensory input to the user’s eyes/ears can generate a warning signal 422 in advance of the device’s activity. For example, a flashbang grenade can include such a device 420, and can generate the warning signal 422 in advance of detonating. For example, the device 420 can generate the warning signal 422 a few milliseconds before detonating. This can have the effect that the warning signal 422 provides the system 100 with advance warning that the flashbang grenade 420 is about to detonate.

[236] At a step 622, the method 600 can determine that a side-channel warning of surprise has been received.

[237] For example, when a flashbang grenade 420 emits a warning signal 422 just before being triggered, the method 600 can determine that the warning signal 422 (from the flashbang grenade) was emitted. Accordingly, the method 600 can determine that the flashbang grenade 420 is about to detonate, and that excessive luminance/ loudness is about to occur. Similarly, an artillery piece (not shown) can include a device that emits an electromagnetic or other warning signal just before being triggered, in which case the method 600 can determine that the warning signal (from the artillery piece) has been emitted, and that excessive loudness is about to occur.

[238] As part of this step, when the warning signal 422 has been encrypted/obfuscated before transmission, the digital eyewear 110 can de-encrypt/de-obfuscate the warning signal 422 with an appropriate de-encryption/de-obfuscation code. For example, the flashbang grenade 420 can include an encryption/obfuscation element (not shown) that can encrypt/obfuscate the warning signal 422 before transmission. In such cases, only those instances of digital eyewear 110 having the appropriate de-encryption/de-obfuscation code would be able to de-encrypt/de-obfuscate the warning signal 422.

[239] In such cases, the digital eyewear 110 can protect users against luminance and loudness from the flashbang grenade 420, while still allowing for full effectiveness against persons using digital eyewear 110 who are not supplied with the appropriate de-encryption/de-obfuscation code. This can have the effect that users such as law enforcement personnel can use digital eyewear 110 for protection against excessive luminance/loudness without the possibility that users with unauthorized digital eyewear 110 are also protected.
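The application does not specify an encryption/obfuscation scheme. As one plausible stand-in, the sketch below authenticates the warning signal 422 with a keyed MAC, so that only eyewear holding the shared key acts on (and could forge) a warning; the key and message contents are invented for the example.

```python
import hashlib
import hmac

SHARED_KEY = b"issued-to-authorized-eyewear-only"  # illustrative key

def make_warning(payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Obfuscate a warning by appending a keyed MAC (one possible scheme)."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def verify_warning(message: bytes, key: bytes = SHARED_KEY):
    """Return the payload if the MAC checks out, else None.

    Eyewear without the key cannot validate warnings, so unauthorized
    units never trigger their shading in response to them.
    """
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

msg = make_warning(b"DETONATION-IMMINENT")
print(verify_warning(msg))                    # b'DETONATION-IMMINENT'
print(verify_warning(msg, key=b"wrong-key"))  # None
```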

[240] At a step 623, in response to the warning signal 422 with respect to sensory or cognitive overload/underload, the method 600 can trigger the system 100 to shade/inverse-shade excess/inadequate luminance/loudness due to external sensory inputs.

For example, in response to the warning signal 422, the system 100 can rapidly shade the lenses 111 before detonation (such as by rapidly triggering polarization of the lenses 111, or of individual or groups of pixels 114 thereof), to limit excessive luminance input to the user’s eyes. This can have the effect that upon detonation, the lenses 111, or a portion thereof, can already be shaded against excessive luminance as an external sensory input to the user’s eyes. In such cases, the digital eyewear 110 can protect users, such as law enforcement or military personnel, against excessive luminance/loudness from the flashbang grenade 420, while still providing full effectiveness against persons not using digital eyewear 110.

[241] For another example, other side-channel warnings of surprises can include exits and entrances to tunnels. This can have the effect that users 101 who drive into or out of those tunnels need not rely on rapid determination of sensory or cognitive overload or underload. Instead, their digital eyewear 110 can receive a warning signal 422, such as provided by a warning device (not shown) disposed near an entrance or exit of the tunnel, so that drivers and their digital eyewear 110 can be warned of upcoming sensory or cognitive underload/ overload due to entrance/ exiting the tunnel.

[242] The method can proceed with the flow point 620B.

[243] A flow point 620B indicates that the method 600 is ready to return to the end of the main method.

Monitoring and delay of sensory inputs

[244] Fig. 6D shows a conceptual drawing of an example method of using a digital eyewear system to adjust sensory inputs involving monitoring and delay of sensory inputs.

[245] A flow point 630A indicates that the method 600 is ready to adjust sensory inputs involving monitoring and delay of sensory inputs.

[246] At a step 631, the method 600 can determine that a sensory or cognitive overload or underload is occurring or about to occur. The digital eyewear 110 can itself determine that the sensory or cognitive overload or underload is occurring or about to occur, or the digital eyewear 110 can receive a side-channel warning of surprise.

[247] At a step 632, the method 600 can receive the external sensory input at a sensory input element of the digital eyewear 110. For example, the sensory input element can include a first layer 111a or a first lens 111a of a multi-layer lens 111 of the digital eyewear 110. The first layer 111a or the first lens 111a of the multi-layer lens 111 can be disposed to receive the external sensory input before it is received by the user 101.

[248] At a step 633, the method 600 can process the external sensory input, such as using the digital eyewear’s computing device 120. For example, the digital eyewear’s computing device 120 can shade/inverse-shade the external sensory input. This can have the effect that the external sensory input can be reduced in luminosity or loudness (in the case of sensory or cognitive overload) or increased in luminosity or loudness (in the case of sensory or cognitive underload).

[249] At a step 634, the method 600 can provide the processed external sensory input to a sensory output element of the digital eyewear 110. For example, the sensory output element can include a second layer 111b or a second lens 111b of a multi-layer lens 111 of the digital eyewear 110. The second layer 111b or the second lens 111b of the multi-layer lens 111 can be disposed to provide the external sensory input to the user 101.
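By way of illustration, steps 632 through 634 could be organized as a single per-frame pass: capture at the first layer 111a, shade overload down and lift underload up, then hand the result to the second layer 111b. The sketch below assumes normalized pixel values; the thresholds and gain are illustrative, not from the application.

```python
import numpy as np

def monitor_and_delay(frame, low=0.05, high=0.85):
    """One processing pass of the two-layer pipeline described above.

    `frame` is an image captured at the first (outer) layer, with pixel
    values normalized to 0..1. Over-bright pixels are shaded down and
    under-lit pixels inverse-shaded up before the frame is handed to
    the second (inner) layer for display.
    """
    out = frame.copy()
    out[frame > high] = high                         # shade overload down
    dim = frame < low
    out[dim] = np.clip(frame[dim] * 3.0, 0.0, low)   # lift underload up
    return out

# Example: a frame with a glare spot and a dark corner.
frame = np.array([[0.95, 0.40],
                  [0.01, 0.60]])
print(monitor_and_delay(frame))
```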

[250] The method can proceed with the flow point 630B.

[251] A flow point 630B indicates that the method 600 is ready to return to the end of the main method.

Induced adjustment of user sensory systems

[252] Fig. 6E shows a conceptual drawing of an example method of using a digital eyewear system with respect to induced adjustment of user sensory systems.

[253] A flow point 640A indicates that the method 600 is ready to induce adjustment of user sensory systems.

[254] As further described herein, the method 600 can induce adjustment of user sensory systems, such as using apparatus including one or more of:

— the first electronic element 511 disposed to be coupled to the user’s iris, pupil, other portion of the user’s eye, or otherwise to have the effects described herein;

— the first signal 512 disposed to be coupled to that first electronic element;

— the second electronic element 521 disposed to be coupled to the user’s eye muscles, sclera, other portion of the user’s eye, or otherwise to have the effects described herein;

— the second signal 522 disposed to be coupled to that second electronic element.

[255] At a step 641, the method 600 can determine an adjustment to induce with respect to user sensory systems. For example, the adjustment can include an adjustment to an opening of the user’s pupil. For another example, the adjustment can include a change to the user’s gaze direction. In other examples, the adjustment can include a change to another feature of the user’s vision, such as using the user’s eye muscles, optic nerve, or other elements of the user’s vision system.

[256] As part of this step, when the method 600 determines that the adjustment should be with respect to an opening of the user’s pupil, the method can continue with the step 642a.

[257] As part of this step, when the method 600 determines that the adjustment should be with respect to the user’s gaze direction, the method can continue with the step 643a.

[258] At a step 642a, the method 600 can generate the first signal 512 described herein, such as at or from the computing device 120, to be coupled to the user’s iris, other portion of the user’s eye, or otherwise to have the effect described herein.

[259] At a step 642b, the method 600 can send the first signal 512, such as from the computing device 120, to the first electronic element 511 (coupled to the user’s iris, pupil, other portion of the user’s eye, or otherwise to have the effects described herein).

[260] At a step 642c, the first electronic element 511 can receive the first signal 512 to be coupled to the user’s iris. This can have the effect that the first signal 512 is coupled to the user’s iris. The first signal 512 can prompt the user’s iris to contract or expand, depending on the selected particular signal. For example, the first signal 512 can prompt the user’s iris to contract, so as to reduce the effect of excessive luminance on the user’s eye. For another example, the first signal 512 can prompt the user’s iris to expand, so as to reduce the effect of inadequate luminance on the user’s eye.

[261] Alternatively, the first signal 512 can be coupled to a different technique for prompting the user’s iris to open or close relative to its current degree of openness. For example, the first signal 512 can be coupled to a shading/inverse-shading element 513 that obscures at least a portion of the user’s pupil. The shading/inverse-shading element 513 can be disposed to allow more/less light into the user’s pupil, prompting the user’s iris to open/close in response thereto.

[262] Thereafter, the method 600 can continue with the flow point 640B.

[263] At a step 643a, the method 600 can generate a second signal 522 described herein, such as at or from the computing device 120, to be coupled to the user’s eye muscles, sclera, other portion of the user’s eye, or otherwise to have the effect described herein.

[264] At a step 643b, the method 600 can send the second signal 522, such as from the computing device 120, to the electronic element 521.

[265] At a step 643c, the second electronic element 521 can receive the second signal 522, coupled to the user’s eye muscles, sclera, other portion of the user’s eye, or otherwise to have the effects described herein.

[266] In an alternative, the second signal 522 can be coupled to a different technique for prompting the user’s eye to alter its gaze direction relative to its current direction. For example, the second signal 522 can be coupled to a shading/inverse-shading element 523 that obscures at least a portion of the user’s pupil. The shading/inverse-shading element 523 can be disposed to allow more/less light into the user’s pupil from one or more selected directions, prompting the user’s eye to alter its gaze in the selected direction in response thereto.

[267] In another alternative, the second signal 522 can be coupled to another different technique for prompting the user’s eye to alter its gaze direction relative to its current direction. For example, the second signal 522 can be coupled to an audio input that provides a voice or other audio prompt informing the user that they should change their gaze direction toward a selected direction. The voice or other audio input can be disposed to inform the user of a desired gaze direction, and to reward the user when the user alters their gaze toward that direction, prompting the user’s eye to alter its gaze in the selected direction in response thereto.

[268] Thereafter, the method 600 can continue with the flow point 640B.

[269] A flow point 640B indicates that the method 600 is ready to return to the end of the main method.

Example additional applications

[270] Fig. 7 shows a conceptual drawing of some example additional applications and embodiments.

[271] In one embodiment, a system 700 can include one or more devices disposed to process a visual or audio image, such as:

— a camera 711 or other imaging sensor, disposed to receive a visual image;

— a microphone 712 or other audio sensor, disposed to receive an audio signal.

[272] In such cases, the camera 711 or microphone 712 can be disposed to receive an input and present that input to a human eye or ear, or to a non-human sensor, such as a device disposed to process visual or audio images. Among other advantages, this can have the effect that the audio/video images provided to a user can improve the user’s audio/visual acuity.

[273] In one embodiment, the camera 711 (or other imaging sensor) can itself include a non-human optical sensor, such as a sensor other than a human eye. For example, the non-human optical sensor can include any image sensor, such as a camera, a CMOS sensor, an image sensor, or otherwise. Similarly, the microphone 712 can itself include a non-human audio sensor, such as a sensor other than a human ear. For example, the non-human audio sensor can include any signal processing system disposed to receive audio input.

[274] In another embodiment, the system 700 including the camera 711 (or other image sensor) can include a first device 721 disposed to enhance or adjust an image on its way to a human eye or the camera 711 (or other imaging sensor).

— For example, the device 721 can include binoculars, a microscope, a telescope, or other scope disposed to receive an image (whether optical or audio) and enhance or otherwise modify that image on its way to an image sensor. In such cases, binoculars, microscopes, and telescopes can adjust the perceived size of the image when perceived by the image sensor.

— For another example, a filter, such as an ultraviolet (UV) filter, a color filter, or otherwise, can adjust a color balance of an image when perceived by the image sensor.

— For another example, a polarizing filter, a prismatic filter, or otherwise, can adjust aspects of an image when perceived by the image sensor. Similarly, an equalizer, or otherwise, can adjust aspects of an audio signal when perceived by the audio sensor.

[275] In another embodiment, the system 700 including the microphone 712 (or other audio sensor) can include a first device 722 disposed to enhance or adjust an audio signal on its way to a human ear or the microphone 712 (or other audio sensor).

— For example, the microphone 712 can be coupled to an amplifier, an equalizer, or other audio equipment disposed to receive an audio signal and enhance or otherwise modify that audio signal on its way to an audio sensor. In such cases, amplifiers or equalizers can adjust the perceived volume or audio balance of the signal when perceived by the audio sensor.

[276] In another embodiment, the system 700 including the camera 711 (or other image sensor) can alternatively include a second device 731 disposed to receive an image for processing or transmission.

— For example, the second device 731 can include a television (TV) camera and optionally a TV transmission system, whether broadcast or closed circuit, and whether analog or digital. Similarly, the device 731 can include a personal video camera, a smartphone camera or other mobile device camera, or otherwise.

— For another example, the second device 731 can include medical equipment disposed to receive an image from a human eye (such as an image of the wearer’s eye, the wearer’s lens, or the wearer’s retina). The second device 731 can include other medical equipment such as might be used by an optometrist or ophthalmologist.

[277] In another embodiment, the system 700 including the microphone 712 (or other audio sensor) can alternatively include a second device 732 disposed to receive an audio signal for processing or transmission.

— For example, the second device 732 can include digital audio equipment for mixing audio signals, “autotune” of audio signals, or other audio equipment such as might be used by an audiophile or a professional sound mixer.

— For another example, the second device 732 can include medical equipment disposed to receive an audio signal in an ultrasonic range, such as an ultrasonic sensor, an ultrasonic imaging system for use in imaging internal body structures, or otherwise.

[278] In another embodiment, the system 700 can include a remote device 741, disposed remotely from the eyewear carried by the wearer.

— For example, the remote device 741 can include a database or server disposed to receive requests and provide responses to the eyewear. For example, the remote device 741 can be disposed to maintain image data.

[279] In another embodiment, the system 700 can include a remote device 742, disposed remotely from an audio device carried by the wearer.

— For example, the remote device 742 can include a database or server disposed to receive requests and provide responses to the audio device. For example, the remote device 742 can be disposed to maintain audio data.

[280] In another embodiment, the system 700 can include a remote device 741 or 742, disposed remotely from a wearable device but within the user’s capability to influence.

— For example, the remote device 741 or 742 can include a smartphone or other mobile device, or a wearable or implantable device.

— For another example, the remote device 741 or 742 can include a remotely mounted video or audio sensor, such as remotely mounted at a selected location, or remotely mounted on a moving platform, such as a vehicle or a drone.

Use in specific activities

[281] Fig. 8 (collectively including figs. 8A-8D) shows a conceptual drawing of an example use of a digital eyewear system.

[282] Digital eyewear can also be disposed to provide the user with the ability to receive sensory inputs and process them cognitively while participating in activities in which visual acuity is valuable to the viewer, such as:

— (A) operating a flying vehicle, such as an aircraft, an ultralight aircraft, a glider, a hang-glider, a helicopter, or a similar vehicle;

— (B) operating a ground vehicle, such as an automobile, a race car, or a similar vehicle;

— (C) operating a water vehicle, such as a kayak, a motorboat, a sailboat or yacht, a speedboat, a cigarette boat, or a similar vehicle;

— (D) operating a motorcycle, a dirt bike, a bicycle, a unicycle, or a similar vehicle;

— (E) participating in a sport using relatively rapid sports equipment, such as baseball, basketball, an equestrian sport (such as dressage or horse racing), football, field hockey, ice hockey, jai alai, lacrosse, a snow sport (such as skiing, sledding, snowboarding, operating a snowmobile, or tobogganing or luge), soccer, or a similar sport;

— (F) participating in an activity in which shooting might occur, such as hunting, “laser tag”, skeet shooting (or otherwise shooting at a moving target), target shooting, or a similar activity;

— (G) participating in an activity in which using a sight (whether stereoscopic or not) might occur, such as using binoculars, using a rifle sight, or photography (whether still photography or motion-picture photography);

— (H) participating in an activity that involves tracking moving equipment, such as viewing rotating turbines or wheels, or for which it is useful to tune a viewing frequency to a frequency of angular position or movement, so as to operate in synchrony therewith;

— (I) participating in an activity in which critical, such as life-critical, decisions are made, such as performing as an emergency responder, emergency room personnel, a law enforcement officer, or military personnel, or a similar activity;

— or otherwise as further described herein.

[283] As described herein, these specific activities can involve circumstances in which the user would gain substantially from enhanced audio or visual acuity. Enhanced audio/video acuity can help the user in circumstances in which the user would find it valuable to view one or more of:

— (A) objects that are in relatively rapid motion with respect to the user, or are otherwise difficult to see when the user is looking directly at them;

— (B) objects that are primarily viewable using the user’s peripheral vision, or other portions of the user’s vision that have a lesser degree of natural acuity;

— (C) objects that call for the user’s immediate or otherwise rapid reaction, such as sports equipment (such as baseballs or tennis balls), terrain (such as road tracks) or other vehicles, or equipment held by other persons (such as whether a device in a person’s hand is a cell phone or a handgun);

— (D) objects that are in motion with respect to the user, such as objects that are moving directly toward or away from the user, or objects that are moving in a region of the user’s peripheral vision;

— (E) objects that are located poorly for viewing with respect to a background, such as objects that are brightly backlit, or for which the sun or other lighting is in the user’s eyes, or which appear before a visually noisy background, or otherwise are difficult to distinguish; or otherwise as described herein.

[284] As described herein, the digital eyewear can improve the user’s audio and/or visual acuity, or improve the user’s ability to see motion, in these specific activities or in these circumstances, without degrading the user’s normal ability to sense audio and/or visual information, and without interfering with the user’s normal activity. In one embodiment, the digital eyewear can operate at a relatively high frequency relative to object motion, such as about 80-150 Hz, or possibly somewhat more or less, such as over about 25 Hz. However, there is no particular requirement for any such limitation. The digital eyewear can operate at any frequency allowing the user to perform normally, without degrading the user’s senses and without substantial sensory interference.
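
By way of illustration only, the following sketch (in Python; strobe_shutter and the set_shutter callback are hypothetical names, not part of this specification) shows one way the periodic shading described in this paragraph might be timed for a chosen frequency and open fraction:

    import time

    def strobe_shutter(frequency_hz, open_fraction, set_shutter, duration_s):
        # Open the shutter for open_fraction of each cycle; set_shutter is a
        # hypothetical callback (True = transparent, False = shaded).
        period_s = 1.0 / frequency_hz
        t_end = time.monotonic() + duration_s
        while time.monotonic() < t_end:
            set_shutter(True)
            time.sleep(period_s * open_fraction)          # discontinuous image shown
            set_shutter(False)
            time.sleep(period_s * (1.0 - open_fraction))  # shaded interval

    # Example: a 100 Hz strobe with a 20% open window, run for one second.
    # strobe_shutter(100.0, 0.2, lambda state: None, 1.0)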

One Example Use

[285] Fig. 8A shows a conceptual drawing of an example use of a digital eyewear system in a sport scenario. A user 801, such as a person participating in or observing a sport, can be watching an object 802, such as a ball, while it is travelling a continuous path 803. The object 802 might be subject to backlighting 804 (or to a background view) that interferes with the user’s view of the object. A view path 805 between the user 801 and the object 802 might be disposed so that the backlighting interferes with the user’s view of the object. Digital eyewear (not shown) can provide a view of a sequence of still images or short videos 806, so as to allow the user 801 to view the object 802 with visual acuity that is better than the user’s view of the continuous path 803.

[286] Similarly, digital eyewear can provide better audio or visual acuity in other contexts. Whenever a user can be assisted by an improved view of an object, or of an augmented reality or virtual reality display, digital eyewear can provide the user with that assisted view. The user can be provided with improved audio/video acuity when controlling vehicles (such as aircraft, ground vehicles, or watercraft), enjoying a gaming experience using an augmented reality or virtual reality presentation, participating in or observing sports events, responding in a rapid-reaction scenario, observing a rotating (or otherwise repetitive) motion, or otherwise as described herein.

Controlling vehicles

[287] As described herein, the user can benefit from improvements in audio/video acuity when controlling, or assisting with controlling, a vehicle, such as an aircraft, a ground vehicle (whether a two-wheeled vehicle, a four-wheeled vehicle, or a larger vehicle), a water vehicle (whether a surface vehicle, a hydroplane, or a subsurface vehicle), or as otherwise described herein. When controlling a vehicle, the user generally directs their attention to a set of distinct sensory inputs, including audio/video inputs involving possible obstacles or other hazards, inputs involving possible traffic instructions, inputs involving possible limitations on user sensory acuity or capacity, inputs involving possible limitations on vehicle operations, or as otherwise described herein.

[288] For example, even a common activity such as driving an automobile can involve audio/video inputs such as hazard warnings (such as a sign labeled “ROAD CONSTRUCTION”), traffic instructions (such as lane instructions, speed limits, and traffic lights), limits on user sensory acuity or capacity (such as a sign labeled “BLIND DRIVEWAY”), or limits on vehicle operations (such as signs warning of steep grades). And this does not even begin to cover potential hazards that involve a combination or conjunction of audio/video sensory inputs, such as children playing near the roadway, other vehicles’ brakes or horns, or even internal vehicle warnings such as brake lights and gasoline gauges.

[289] Similarly, even a common activity such as driving an automobile can involve untoward restrictions on user sensory inputs, such as one or more of: large vehicles or terrain blocking the driver’s sight, glare or bright sunlight (particularly near sunrise or sunset), dark or unlit roads (particularly at night or in tunnels), or rapid approach of road hazards (such as when passing pedestrians or when being passed by emergency vehicles).

Aircraft

[290] Fig. 8B shows a conceptual drawing of an example use of a digital eyewear system in an aircraft piloting scenario. A user, such as a pilot of an aircraft 811, can observe an airstrip 812. The airstrip 812 can be disposed with markings 813, such as indicating locations where the aircraft 811 must touch down to successfully land (or to successfully reach a selected taxiway). As the aircraft 811 moves relatively quickly with respect to the airstrip 812, the pilot can have their visual acuity improved by techniques such as those described with respect to fig. 8A. This can provide the pilot of the aircraft 811 with sufficient information to land better or more safely.

[291] The pilot of the aircraft 811 might also observe traffic 814, that is, other aircraft that might pose a hazard. The traffic 814 might be moving relatively rapidly with respect to the aircraft 811, particularly if the two are approaching directly or even at an angle. Moreover, the pilot’s view of the traffic might be hindered by backlighting 815, such as the sun being behind the traffic 814 with respect to the pilot’s view 816. In such cases, the pilot’s visual acuity can be improved by techniques such as those described with respect to fig. 8A.

[292] In one embodiment, when used in aviation, the user can use the digital eyewear while piloting an aircraft, such as a jet or propeller aircraft, an ultralight aircraft, a glider, a hang-glider, a helicopter, or another vehicle as otherwise described herein. While some of the examples described herein are of particular value for powered aircraft, others are of value for both powered and unpowered aircraft.

[293] In such cases, it might sometimes occur that an object outside the aircraft will be moving relatively rapidly with respect to the aircraft: for some examples, such objects can include other aircraft, buildings or towers, and ground terrain (such as during takeoff or landing) or markers thereon.

[294] For one example, aircraft in the process of takeoff might cross one or more runway markers indicating important information for the pilot. Such information could include limit lines indicating when the aircraft should exceed certain speeds (sometimes known as “v1” and “v2”) so as to be able to safely lift off the runway before reaching its end, or so as to be able to safely clear obstacles at the end of the runway (such as buildings, telephone wires, or otherwise).

[295] For another example, aircraft in the process of landing might use one or more runway markers indicating other important information for the pilot. Such information could include limit lines indicating when the aircraft should be able to stop so as to perform a short field landing, or when the aircraft should be able to stop so as to turn off the runway onto a designated taxiway. In such cases, use of techniques such as those described herein can provide the pilot with improved visual acuity of runway markers or other ground markings, such as by providing an image of the marker that is not blurred by the rapid movement of the aircraft.

[296] For another example, during the process of taxiing, techniques described herein might provide enhanced visual acuity to the pilot with respect to other aircraft moving on the surface of the airport, or about to land on the surface of the airport, so as to provide advance warning to the pilot of the possibility of a collision. When the aircraft is moving at relatively high speed, it can be difficult for the pilot to see other aircraft moving from the side, or oncoming from the rear, or even approaching from in front. Improving the pilot’s visual acuity might be valuable in preventing collisions. In such cases, use of techniques such as those described herein can provide the pilot with improved visual acuity with respect to other aircraft, such as by providing an image of the other aircraft that is not blurred by the rapid relative movement of the two aircraft.

[297] In such cases, it might sometimes occur that the user’s (whether the user is the pilot, co-pilot, or otherwise) audio/video acuity would be more useful if enhanced. For some examples, the user might benefit from better peripheral acuity with respect to objects to the side of the aircraft. For other examples, the user might benefit from better audio acuity with respect to the control surfaces, engine, fuselage, wind, wings, or other aspects of the aircraft. In such cases, use of techniques such as those described herein can provide the pilot with improved audio acuity, such as by providing the pilot with audio inputs that are relatively clear with respect to background noises otherwise present during operation of the aircraft.

[298] For example, aircraft in the air, or on the ground, might be subject to engine trouble, information about which can sometimes be available to the pilot in the form of engine noise, or shaking or shuddering in the control surfaces, fuselage, wings, or other portions of the aircraft. Because the pilot might be concentrating on other aspects of controlling the aircraft, this information might be missed due to lack of adequate audio or audio/video acuity. Improving the pilot’s audio or audio/video acuity might be valuable in preventing aircraft trouble in the air or on the ground. In such cases, as described above, use of techniques such as those described herein can provide the pilot with improved audio acuity, such as by providing the pilot with audio inputs that are relatively clear with respect to background noises otherwise present during operation of the aircraft.

[299] For another example, aircraft in the air, or on the ground, might be subject to other forms of failure, such as weakened control surfaces, weakened portions of the fuselage or wings, or other problems that might be about to occur. Information about such potential problems can sometimes be available to the pilot in the form of noises from the engine, hydraulic or fuel lines, or structural elements of the aircraft. Again, because the pilot might be concentrating on other aspects of controlling the aircraft, this information might be missed due to lack of adequate audio or audio/video acuity. Improving the pilot’s audio or audio/video acuity might be valuable in preventing otherwise avoidable errors by, or failures of, aircraft components. In such cases, as described above, use of techniques such as those described herein can provide the pilot with improved audio acuity, such as by providing the pilot with audio inputs that are relatively clear with respect to background noises otherwise present during operation of the aircraft.

[300] For some other examples, the user might benefit from visual acuity with respect to features of local airspace (such as compass direction or GPS location information, radio or traffic beacons, transponder data from other aircraft, or weather and other atmospheric effects). Similarly, the user might benefit from visual acuity with respect to artificially defined features (possibly provided using an augmented reality technology) of local airspace (such as air traffic control zones, air travel guidelines, defined airway travel paths, glide path guidelines, or noise control guidelines). In such cases, use of techniques such as those described herein (in combination or conjunction with augmented reality) can provide the pilot with additional information and improved visual acuity with respect to that additional information, such as by providing one or more augmented reality images of information useful to the pilot when operating the aircraft.

[301] For example, while the pilot is generally aware of the aircraft’s magnetic heading, by use of aircraft instruments, it might not be as easy for the pilot to determine a magnetic heading of a direction in which the pilot wishes to go. Similarly, the direction and position of radio beacons, runway headings, terrain, and other landmarks (including such information as the height of mountains and other landmarks) can be valuable for the pilot to be aware of without having to measure them on a map or otherwise request information from other sources (such as ATC, air traffic control). In such cases, the pilot would benefit from being able to identify those landmarks, such as using a HUD (heads-up display) or an augmented reality environment, which could provide that information. In such cases, use of techniques such as those described herein (in combination or conjunction with augmented reality) can provide the pilot with additional information and improved visual acuity with respect to that additional information, such as by providing one or more augmented reality images of information useful to the pilot when operating the aircraft.

[302] For another example, particularly when controlling a glider, the pilot could find it valuable to have information with respect to weather reports, weather sightings, updrafts or downdrafts, and the oxygenation level of the aircraft cabin. While some aircraft have instruments that provide this information, not all do. Accordingly, the pilot might benefit from additional information with respect to these identifiable, yet difficult to see, aspects of the environment outside the aircraft. In such cases, use of techniques such as those described herein (in combination or conjunction with augmented reality) can provide the pilot with additional information and improved visual acuity with respect to that additional information, such as by providing one or more augmented reality images of information useful to the pilot when operating the aircraft.

[303] For another example, particularly when operating in or near a controlled airspace, the pilot could benefit from information, such as available in an augmented reality environment, with respect to the location and limits of ATC (air traffic control) zones, glide path guidelines, noise abatement guidelines (including designated pathways to follow for noise abatement), and tower instructions (including designated pathways to follow according to those tower instructions). Similarly, certain known airports (such as the LAX “highway in the sky”) provide designated volumes that aircraft are allowed to traverse without tower check-in; an augmented reality environment can provide aircraft pilots with enhanced visual acuity with respect to the tower and the volumes designated for such behavior. In such cases, use of techniques such as those described herein (in combination or conjunction with augmented reality) can provide the pilot with additional information and improved visual acuity with respect to that additional information, such as by providing one or more augmented reality images of information useful to the pilot when operating the aircraft.

[304] In such cases, it might sometimes occur that the user’s video acuity might be enhanced with the elimination or reduction of the effect of glare, reflection, sunlight, or other sources of bright or distracting light. For some examples, the user might benefit from better visual acuity in the contexts of one or more of: heading toward the sun (either directly or at a small angle); heading over bodies of water, cloud cover, or similarly reflecting surfaces; climbing toward higher altitude (where the sky can be substantially brighter than ground terrain). In such cases, use of techniques such as those described herein can provide the pilot with enhanced visual acuity and the ability to better operate the aircraft even in environments that are subject to degrading visual effects.

[305] For example, in addition to sources of bright or distracting light, the pilot might find it difficult to identify other backlit aircraft, backlit landmarks (such as buildings or towers), or other hazards. In such cases, the pilot could benefit from improved visual acuity with respect to such objects outside the aircraft. In such cases, use of techniques such as those described herein can provide the pilot with enhanced visual acuity and the ability to better operate the aircraft even in environments that are subject to degrading visual effects.

[306] For another example, in addition to sources of bright or distracting light, the pilot might have their visual acuity reduced by effects due to the transition from nighttime to daytime, or from daytime to nighttime. Accordingly, the pilot might benefit from improved visual acuity due to reduction of the effects of that transition, or other amelioration of changes between ambient brightness levels that might occur. Such cases might include circumstances in which a relatively bright light that was otherwise obscured (such as the sun being obscured by mountainous terrain) becomes unobscured (such as by the aircraft moving to a position where the mountainous terrain no longer blocks the sun). Similarly, such cases might include circumstances in which a relatively bright light comes into view or is otherwise altered, such as airport lights that are turned on when it becomes nighttime. In such cases, use of techniques such as those described herein can provide the pilot with enhanced visual acuity and the ability to better operate the aircraft even in environments for which it is difficult for the pilot to adjust to changing lighting conditions.

Driving and other vehicles

[307] In one embodiment, the user can use the digital eyewear while driving on land (including race car driving), piloting a vehicle on water (including a sailboat or yacht, a speedboat or hydroplane, or water skis), or operating another type of vehicle (including a snow sport vehicle such as skis, a sled, a snowboard, a snowmobile, a toboggan or luge, or a similar vehicle).

[308] When operating a ground vehicle (such as a racing car, an automobile, a truck, an all-terrain vehicle, a camper or recreational vehicle, a motorcycle, a dirt bike, a bicycle or unicycle, or otherwise as described herein), the driver might need sufficient visual acuity to identify operating hazards. These operating hazards can include ambient or upcoming lighting or lighting changes, ambient or upcoming weather, noise concerns, road curves or other road changes, road information, other vehicles, terrain hazards, wildlife and other nonvehicle hazards, or otherwise as described herein.

Tunnels and other darkened regions

[309] As described herein and as sometimes described in the Incorporated Disclosures, when a vehicle enters or exits a relatively dark or enclosed tunnel, the operator’s audio/visual acuity can be degraded by sudden entry or exit with respect to an environment that has significant differences in available light and sound. In such cases, use of techniques such as those described herein can provide the operator with enhanced visual acuity and the ability to better operate the vehicle even in environments for which it is difficult for the operator to adjust to changing audio/video conditions.

[310] For example, the operator’s visual acuity can be degraded by sudden entry into a region where light is significantly reduced. This can reduce the operator’s ability to see driving conditions, such as road curves or surfaces, other vehicles, obstacles or potholes in the road, or other hazards. The operator’s inability to clearly see driving hazards can prompt the operator to slow down, which can have serious effects with respect to racing or with respect to travel time.

[311] For another example, the operator’s visual acuity can also be degraded by sudden exit from a region where light is significantly reduced, that is, sudden entry into a region where light is much brighter than in the region just exited. This can also reduce the operator’s ability to see driving conditions, such as those described above. Similarly, the operator’s inability to clearly see driving hazards can prompt the operator to slow down, which can have serious effects with respect to racing or with respect to travel time.

[312] Similarly, when a vehicle enters or exits a relatively enclosed tunnel, the operator’s audio acuity can be degraded by sudden entry or exit with respect to the environment with significant differences in available sound. For example, a relatively enclosed tunnel can degrade the operator’s audio acuity, such as their ability to hear relatively softer sounds or to distinguish between similar sounds. This can be an issue for the operator when attempting to determine whether the vehicle is close to a wall, another vehicle, or a different obstacle. Similar to other issues in which the operator’s visual acuity is degraded, the operator’s audio acuity being degraded can prompt the operator to slow down, which can have serious effects with respect to racing or with respect to travel time.

[313] Similarly, when a vehicle enters or exits a relatively darkened region, such as a region that falls (whether gradually or suddenly) into shadow, the operator’s visual acuity can be degraded by the change in lighting level. For example, entering or exiting a relatively darkened region can reduce the operator’s ability to see driving conditions, such as those described above.

Other driving hazards

[314] Other driving hazards can also have an effect on the operator’s audio/video acuity, and consequent ability to operate the vehicle, either at speed (such as for a racing car), with relative safety (such as for an automobile or truck), with relative sporting skill (such as for a motorcycle, dirt bike, or bicycle), or otherwise as described herein.

[315] Fig. 8C shows a conceptual drawing of an example use of a digital eyewear system in a driving scenario. A user, such as an operator of a vehicle 821 travelling in a selected direction 822, can observe a road 823. The road 823 can be disposed with markings 824, such as indicating upgrades or downgrades, left or right turns, road banking, or other information of value to drivers. The road 823 might also have hazards that the operator might advantageously wish to know about, such as debris 825 on or near a lane in which the vehicle 821 is travelling, or such as traffic in the form of other vehicles 826, possibly travelling in the same or a different direction 827. As the vehicle 821 might move relatively quickly with respect to the road 823, the operator can have their visual acuity improved by techniques such as those described with respect to fig. 8A. This can provide the operator of the vehicle 821 with sufficient information to travel better or more safely.

[316] For another example, when operating a motor vehicle, it might sometimes occur that the operator can see the wheels of a nearby motor vehicle. Because of the reduced visual acuity that can occur due to the rapid rotation of those wheels, the operator often sees only a blurred view of those wheels, with little ability to determine their rotation speed relative to the motor vehicle the operator is controlling. In such cases, use of techniques such as those described herein can provide the operator with enhanced visual acuity with respect to the rotation of those wheels, possibly with a superior ability to determine a rate of rotation and thus a speed of the nearby other vehicle.
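
By way of illustration only, the following sketch (in Python; the function name and the assumed 0.73 m tire diameter are illustrative, not part of this specification) shows the arithmetic relating an observed wheel rotation rate to the other vehicle’s speed:

    import math

    def vehicle_speed_mph(wheel_rpm, tire_diameter_m=0.73):
        # Distance per minute = rotations/minute x tire circumference.
        metres_per_minute = wheel_rpm * math.pi * tire_diameter_m
        return metres_per_minute * 60.0 / 1609.344  # metres/hour -> miles/hour

    # With the assumed 0.73 m tire, 700 rotations/minute works out to about
    # 60 miles/hour, matching the figures used in paragraph [351] below.
    # print(round(vehicle_speed_mph(700.0), 1))  # ~59.9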

[317] For another example, when operating a motor vehicle, the presence of relatively rapid road curves or other road changes, such as upgrades or downgrades, tilting of the road to the right or left, and the presence of terrain hazards (including blown-out tires, litter, potholes, and large trash that might have fallen from trucks), can pose a hazard. In such cases, the operator of the vehicle can benefit from enhanced audio/video acuity, particularly with respect to hearing or seeing objects in the path of the vehicle. In such cases, use of techniques such as those described herein can provide the operator with enhanced visual acuity and the ability to better operate the vehicle even when rapid changes in the road might otherwise make it difficult for the operator to adjust to changing road conditions.

[318] For another example, when operating a motor vehicle, the presence of slippery portions of the road can pose a hazard. In such cases, the operator of the vehicle can benefit from enhanced audio/video acuity, particularly with respect to hearing or seeing objects or substances (such as oil or water) in the path of the vehicle. In such cases, use of techniques such as those described herein can provide the operator with enhanced visual acuity and the ability to better operate the vehicle even when rapid changes in the road might otherwise make it difficult for the operator to adjust to changing road conditions.

[319] For another example, when operating a motor vehicle, the presence of ambient or upcoming weather can pose a hazard. In such cases, the operator of the vehicle can benefit from enhanced audio/video acuity, particularly with respect to hearing or seeing aspects of weather, road conditions in weather, or other effects on driving conditions from weather. In this context, “weather” can include fog, mist, rain, or other effects of current or upcoming precipitation; lightning or thunder; or otherwise as described herein. In such cases, use of techniques such as those described herein can provide the operator with enhanced visual acuity and the ability to better operate the vehicle even when weather degrades the operator’s ability to determine road information.

[320] Techniques described herein can provide the operator with improved ability to see road conditions and road signs, even when those road conditions and road signs are obscured by precipitation. Techniques described herein can also provide the operator with improved ability to see road conditions and road signs, even when the operator’s ability to see those road conditions and road signs is degraded by lightning or night lighting. Techniques described herein can also provide the operator with improved ability to hear sounds that might provide information with respect to road conditions or other vehicles (such as other automobiles, trucks, or railroad cars), even when the operator’s ability to hear those sounds is degraded by precipitation, wind, thunder, or as otherwise described herein.

[321] For another example, when operating a motor vehicle, the presence of terrain hazards (including road tilting or turning, or the presence of wildlife and other nonvehicle hazards), can pose a hazard to the operator or passengers of the vehicle. In this context, “road tilting or turning” can include any change in aspects of the road that might have an effect on driving conditions, such as rapid turns to one side, steep banking of the road, steep rises or declines, or other road gradients; speed bumps; changes in road surfaces (such as changes in paving), or otherwise as described herein. In this context, “terrain hazards” can include the presence of wildlife or objects on the road that might have an effect on driving conditions, such as deer crossing, falling rocks, possible flooding, or otherwise as described herein. Although not wildlife, certain areas and roads are sometimes subject to unexpected crossing by persons who are seeking to travel, such as near an international border.

[322] Techniques described herein can provide the operator with improved ability to determine the current or oncoming presence of such terrain hazards, including by presenting the operator with improved visual acuity of oncoming terrain hazards (such as road gradients or wildlife), improved audio acuity of current terrain hazards (such as road paving or flooding), and otherwise as described herein.

Gaming features

[323] In one embodiment, another driving feature can include an augmented reality or virtual reality experience in response to a driving exercise by another driver. For example, the other driver can be an expert race car driver, motorcyclist, dirt biker, or bicyclist. A set of audio/video recording equipment (and possibly other sensory recording equipment, such as haptic recording equipment or olfactory recording equipment) can provide a record of the expert performing a driving exercise. A non-expert can experience the expert driving exercise without having to be an expert themselves; moreover, the non-expert can experience the expert driving exercise without the risk that might be associated with a non-expert performing that same driving exercise.

[324] For example, a non-expert can be entertained by the expert driving experience without needing the skill, practice, equipment, or risk associated with the expert driving experience. The non-expert need not travel to the location where the expert driving experience is performed and need not worry about obstructing other drivers (whether expert or non-expert) driving the same course. Moreover, the non-expert can use enhanced audio/video acuity to gain greater enjoyment from the expert driving experience, without concern that looking at scenery, focusing on capabilities of the vehicle, or losing focus on the driving task at hand will be untoward.

[325] For another example, a non-expert can take advantage of the expert’s skills and familiarity with the particular course, just as if the non-expert were as familiar with that course as with their daily working commute. In such cases, the non-expert might be entertained or interested in following friends, celebrities, known experts, or their own past experience. The non-expert thus could practice and/or train using the work of known experts or their own past experience; could enjoy the same experience as their friends or their favorite celebrities; or could share knowledge about their experiences with their friends and/or teammates.

[326] For another example, a common set of vehicle operators who drive in a related area could be supervised by a more experienced vehicle operator, such as a delivery or taxi driver who has been working in the area for a significant amount of time reviewing the work of relative newcomers. In such cases, the supervisor could provide assistance and helpful hints to the newcomers, could grade those newcomers with respect to their skill development, and could compare those newcomers with respect to their skill development.

[327] For another example, a course developer could gamify the experience by having one or more persons of differing skill levels provide a course-driving experience. Non-experts (and experts) could compete on that course to see who is able to create the best (fastest, safest, most interesting or scenic) experience with driving the course. This could be combined with allowing players to alter the equipment they use when driving the course. Thus, for example, a player could score more points when correctly following an expert’s pre-recorded experience and fewer points when failing to correctly follow that pre-recorded experience. For another example, a player could score more points when completing the course in less time or with less risk, or by providing a more entertaining or exciting experience. For another example, a player could score more points when completing the course with less versatile equipment and fewer points when completing the course with more versatile equipment.

[328] While this Application primarily describes operating a vehicle in an augmented reality or virtual reality environment, on a course modeled on a real-world environment, in the context of the invention, there is no particular requirement for any such limitation. For example, the course can be modeled on a real-world course for which legal or practical restrictions prevent access (such as a mountain biking trail across Mt. Everest). For another example, the course can be modeled on an artificial course that might be real but has never been built (such as a motocross event participating in a running of the bulls in Madrid, or a similar event held on Mars). For another example, the course can be modeled on an artificial course that is not believed to be physically possible, such as operating a spacecraft in the “Death Star” run in the movie “Star Wars”. For another example, the course can be modeled on an artificial course that uses laws of physics that are known to be false, such as operating a character in a variant of the video game “Super Mario” or another such video game.

[329] While this Application primarily describes operating a vehicle in an augmented reality or virtual reality environment, in the context of the invention, there is no particular requirement for any such limitation. For example, the augmented reality or virtual reality environment can be modeled on an environment in which the player does not use a vehicle, such as a fantasy environment, or such as an historic environment, or such as a real-world environment in which the player is not operating a vehicle, such as a law enforcement or emergency responder environment (as otherwise and further described herein).

[330] While this Application primarily describes operating a vehicle in a real, artificial, or augmented environment, in the context of the invention, there is no particular requirement for any such limitation. For example, the user can learn a skill by performing that skill with respect to a real, artificial, or augmented environment, with an inspector, supervisor, or teacher. There are many skills that are difficult to teach, except by observing the student and approving/disapproving particular examples of their performance. Examples include one or more of: baking or cooking, ballet or dancing, conducting a medical examination, construction work, interrogating witnesses, performing gymnastics or other athletic skills, performing surgery, piloting a fighter aircraft, playing a musical instrument, playing a sport (such as baseball, basketball, football, golf, or soccer), playing master-level chess, playing poker, recognizing deception, safely performing law enforcement work, sexing eggs, singing (alone or with a group), or other skills not easily represented in any symbolic form.

Water hazards

[331] Water vehicles (such as motorboats, sailboats, speedboats) can involve hazards that have an effect on the operator’s audio/video acuity, and their consequent ability to operate the vehicle. For example, water can reflect sunlight to produce glare, which can affect the operator’s ability to see objects, particularly those objects obscured by the glare, in the same direction as the glare, or at a distance. In such cases, techniques such as described herein can be used to ameliorate the effect of glare or other sunlight or brightness effects, such as to improve the user’s visual acuity and allow the user to operate the vehicle at greater speed, with lesser risk, and with better maneuverability.

[332] For example, water vehicles can also be subject to water hazards, such as underwater obstacles (branches, plants, rocks, and/or otherwise as described herein), or such as surface obstacles (buoys, other vehicles, and/or otherwise as described herein). These obstacles might not be easily discernible to the vehicle operator from a distance or otherwise, possibly due to degraded visual acuity as otherwise described herein, or possibly due to degraded visual acuity in response to murky water or other sight restrictions. In such cases, techniques such as described herein can be used to ameliorate the degradation in the user’s visual acuity, such as by preventing the user from being subject to glare, or such as by allowing the user to obtain superior views of underwater or otherwise obscured objects.

Sports events

[333] As described herein, one possible use of digital eyewear can use techniques described with respect to fig. 8A.

[334] In one embodiment, another feature can include participation in a sporting event, as otherwise and further described in the Incorporated Disclosures. For example, a player with improved audio/video acuity can be disposed to catch or otherwise respond to an incoming ball in a sporting event, such as a baseball, basketball, football, hockey puck, jai alai ball, soccer ball, tennis ball, or otherwise as described herein. In such cases, when the incoming ball is difficult to see because of backlighting (such as by the sun or by sporting arena lighting), the techniques described herein can be used to ameliorate any degradation in visual acuity by a player.

[335] In one embodiment, another feature can include participation in a sporting event, as otherwise and further described in the Incorporated Disclosures. For example, a player with improved audio/video acuity can be disposed to hit, throw, or otherwise provide an outgoing ball in a sporting event, such as a ball as described above, or a golf ball, hockey puck, jai alai ball, or otherwise as described herein. In such cases, when the player’s target is difficult to see because of lighting conditions, such as backlighting as described herein, the techniques described herein can be used to ameliorate any degradation in visual acuity by a player.

[336] In one embodiment, another feature can include participation in a sporting event such as a high-speed race, a rodeo event, a shooting event (including skeet shooting), a skiing event (including downhill racing or slalom competition), or otherwise as described herein. Techniques described herein can provide the user with enhanced audio/video acuity; this can have the effect that the sporting participant can better avoid obstacles or other risks associated with the sport, particularly when the sport is performed at relatively high speed.

[337] In one embodiment, another feature can include observation of a sporting event, as otherwise and further described in the Incorporated Disclosures. For example, an observer (such as a coach, scout, spectator, or as otherwise described herein) might have their visual acuity degraded by lighting, such as described herein, or by having a viewing target obscured by another object, as otherwise and further described herein. In such cases, when the observer’s target is difficult to see because it is partially obscured, techniques described herein can be used to ameliorate any degradation in visual acuity by an observer.

Rapid reaction

[338] In one embodiment, another feature can include a user who is involved in rapid-response or life-critical decisions, such as firefighting personnel, search/rescue personnel, emergency responders (including EMTs and medical personnel), emergency room personnel (including medical personnel and their assistants), law enforcement personnel, military personnel, and other personnel described herein. For example, users who are involved in such decisions are often required to make rapid decisions or life-critical decisions with limited information: (A) law enforcement officers and military personnel are sometimes involved in shoot/don’t-shoot decisions with respect to particular targets, whether with respect to firearms, tasers, or other weaponry; (B) emergency responders and emergency room personnel are sometimes involved in choice-of-care decisions; (C) firefighting personnel and search/rescue personnel are sometimes involved in decisions involving whether individuals are self-capable or need assistance in rescue operations. In such cases, personnel could benefit from enhanced audio/video acuity, such as being able to determine whether particular objects in their field of view are of special danger or interest.

Law enforcement

[339] For example, a law enforcement officer who might be engaged in apprehending a suspect would prefer to have substantial confidence with respect to whether the suspect is carrying a lethal weapon, such as a pistol or a knife. Mistaking an innocuous object, such as a cell phone, for a lethal weapon might lead the law enforcement officer to use lethal force on the suspect (such as shooting the suspect) when this is unnecessary, which could lead to the unnecessary death or injury of the suspect. Similarly, mistaking a lethal weapon for an innocuous object might lead the law enforcement officer to fail to use force on the suspect, which could lead to the unnecessary death or injury of the law enforcement officer. Accordingly, rapid identification by law enforcement officers of objects in possession of suspects is desirable.

[340] In such cases, techniques described herein can provide the law enforcement officer with enhanced audio/video acuity, so as to better perceive the distinction between a pistol and a cell phone (such as when the suspect is removing the object from a pocket). Using techniques described herein, the law enforcement officer can obtain a more detailed image of the object as the suspect is moving it, instead of having to maintain a complete review of the suspect, other objects in the near vicinity, and the possibly dangerous object, all at the same time. For example, techniques described herein can provide the law enforcement officer with a sequence of images, each associated with an artificial intelligence evaluation or machine learning evaluation of an amount of attention the law enforcement officer should direct to that object. In such cases, if the suspect moves in a threatening manner while concurrently removing an innocuous object from their pocket, an artificial intelligence technique or a machine learning technique can warn the law enforcement officer of the suspect’s relatively threatening move while concurrently assuring the law enforcement officer of the relative innocuousness of the object the suspect is holding.
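
By way of illustration only, the following sketch (in Python; ScoredFrame, score_frames, and the classify callback are hypothetical, and no trained model is supplied here) shows one way an attention score could be attached to each image in such a sequence:

    from dataclasses import dataclass

    @dataclass
    class ScoredFrame:
        frame_id: int
        object_label: str       # e.g. "cell phone" versus "handgun"
        attention_score: float  # 0.0 (innocuous) .. 1.0 (direct full attention here)

    def score_frames(frames, classify):
        # classify stands in for a trained model returning (label, threat
        # probability); this sketch supplies no such model itself.
        return [ScoredFrame(i, *classify(frame)) for i, frame in enumerate(frames)]

An alert policy could then flag only frames whose attention_score exceeds a threshold, while concurrently labeling the held object as innocuous, consistent with the behavior described in this paragraph.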

[341] Similarly, when a law enforcement officer meets a suspect, techniques described herein can use artificial intelligence techniques or machine learning techniques to perform facial recognition of the suspect, so as to determine whether the suspect has any outstanding warrants, is a known violent offender, or is otherwise dangerous to the law enforcement officer or to innocent bystanders. In such cases, an augmented reality or virtual reality system can alert the law enforcement officer with respect to that information.

[342] Similarly, when a law enforcement officer meets a suspect, techniques described herein can use artificial intelligence techniques or machine learning techniques to perform facial recognition of micro-expressions by the suspect, so as to determine whether the suspect is manifesting violent emotion likely to lead to an armed confrontation with the law enforcement officer, or is otherwise dangerous to the law enforcement officer or to innocent bystanders. In such cases, an augmented reality or virtual reality system can alert the law enforcement officer with respect to that information.

Emergency patient treatment

[343] For example, an emergency responder or emergency room personnel providing care for a patient would prefer to have substantial confidence with respect to whether the patient is subject to a life-threatening medical condition, and if so, which one. Mistaking a life-threatening medical condition for an ordinary patient effect might lead the emergency responder or emergency room personnel to fail to use emergency techniques with respect to the medical condition. Similarly, mistaking an ordinary patient effect for a life-threatening medical condition might lead the emergency responder or emergency room personnel to address a mistaken priority with respect to the patient’s care, or to fail to address a more serious patient medical condition.

[344] In such cases, techniques described herein can provide the emergency responder or emergency room personnel with enhanced audio/video acuity, so as to better perceive whether the patient is subject to one or more life-threatening medical conditions. Using techniques described herein, the emergency responder or emergency room personnel can obtain a relatively rapid assessment of the patient’s “ABCDE” factors (airway, breathing, circulation, disability, and exposure), so as to address the most life-threatening issues with respect to the patient.

[345] In one example, techniques described herein can use an artificial intelligence technique or a machine learning technique to recognize whether the patient has an obstructed airway or is breathing satisfactorily, so as to alert the emergency responder or emergency room personnel with respect to that information. Similarly, techniques described herein can use an artificial intelligence technique or a machine learning technique to recognize whether the patient has adequate circulation or is bleeding significantly, so as to alert the emergency responder or emergency room personnel with respect to that information. Similarly, techniques described herein can use known techniques to alert the emergency responder or emergency room personnel with respect to other important patient information, such as whether the patient shows signs of exposure, has a “medic alert” bracelet or other indicator of allergy or a medical condition affecting treatment, or otherwise as described herein.

Firefighting or search/rescue

[346] For example, firefighting personnel responding to an emergency would prefer to have substantial confidence with respect to (A) whether any potential victims are present in a fire zone, (B) whether particular regions of a building or other structure remain sound and capable of carrying the weight of firefighting personnel, or otherwise as described herein. Similarly, search/rescue personnel responding to a possible victim in need of location or rescue would prefer to have substantial confidence with respect to (C) whether movements at a distance in the search/rescue personnel’s field of view are those of possible victims or are instead irrelevant to the search/rescue operation.

[347] In such cases, techniques described herein can provide the firefighting personnel with enhanced audio/video acuity, so as to better perceive the scope and severity of a fire zone, the presence of potential victims in that fire zone, the possibility of that fire zone threatening the structural integrity of a building, and otherwise as described herein. Using techniques described herein, the firefighting personnel can obtain a relatively rapid assessment of these firefighting factors, so as to address the most important issues with respect to the fire.

[348] In one example, techniques described herein can use an artificial intelligence technique or a machine learning technique to (A) identify a heated region (such as appearing in an infrared frequency spectrum) corresponding to a scope or severity of a fire, (B) identify a shape (such as appearing in an infrared or visual frequency spectrum) corresponding to a person, and/or (C) identify audio/video information corresponding to a relatively weakened building structure. This information can alert the firefighting personnel with respect to the fire, any potential victims, and likely safe routes of travel within the fire zone. Similarly, techniques described herein can use an artificial intelligence technique or a machine learning technique to recognize audio/video information corresponding to calls for help from potential victims or from persons unaware they are at risk.

Other critical or life-threatening circumstances

[349] For example, techniques described herein can be useful to any other personnel involved in other critical or life-threatening circumstances, or in circumstances in which critical or life-threatening decisions are made. Such cases can include military personnel, bomb-defusing operations, industrial accident prevention, and otherwise as described herein.

Tuned audio/video acuity frequency

[350] Fig. 8D shows a conceptual drawing of an example use of a digital eyewear system in a scenario including an object having repetitive motion, such as rotation. A user 831 can observe an object 832, such as a wheel or another object exhibiting rotation 833 or another repetitive motion. When the object 832 is rotating relatively quickly, the user’s view 834 of the object is likely to be blurred or otherwise disposed so as to prevent the user 831 from having adequate visual acuity with respect thereto. For example, when the object 832 is rotating relatively quickly, the user 831 might have difficulty seeing defects 835 or other details with respect thereto.

[351] As described herein, when the object 832 is exhibiting rotation 833 or another repetitive motion, the digital eyewear can be tuned so as to match a frequency of the rotation. For example, if a wheel is rotating at 60 miles/hour, that is, about 700 rotations/minute, the digital eyewear can be adjusted so as to provide a fixed number of images at that frequency, or at a multiple or submultiple thereof. In such cases, when the wheel is rotating at 700 rotations/minute and the digital eyewear provides the user 831 with 350 images/minute, the user should see the defect 835 once every two rotations, each time at the same apparent location. Thus, the defect 835 will appear to be unmoving in the user’s view 834, even though it is actually rotating at a relatively high speed.
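
By way of illustration only, the following sketch (in Python; the function name is hypothetical) works through the arithmetic of this paragraph, confirming that at 350 images/minute a defect on a wheel rotating at 700 rotations/minute appears at the same angle in every image:

    def defect_angle_deg(image_index, wheel_rpm, images_per_minute):
        # Angular position of the defect at each strobed image, in degrees.
        rotations_per_image = wheel_rpm / images_per_minute
        return (image_index * rotations_per_image * 360.0) % 360.0

    # With the figures from paragraph [351]: 700 rotations/minute viewed at
    # 350 images/minute means exactly two rotations per image, so the defect
    # sits at the same angle in every image and appears frozen.
    # [defect_angle_deg(k, 700.0, 350.0) for k in range(4)]  # [0.0, 0.0, 0.0, 0.0]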

[352] In one embodiment, another feature can include a user who is involved in observation or repair of an object that moves relatively rapidly while in operation, but which the user desires to examine closely, such as for cracks, dents, other damage, loose attachment, maladjustment, misalignment, or other actual or potential error. For example, the user might wish to examine a rotating wheel to determine whether that object is properly centered. For another example, the user might wish to examine an object that is positioned on a rotating lathe to determine whether that object has any scratches or other damage. For another example, a user might wish to examine a machine part, such as a turbine blade, while in operation, to determine whether that object is cracked or is misaligned relative to other blades in the same turbine. For another example, a user might wish to examine an engine to determine whether it is emitting any unexpected sounds or other audio evidence of damage or mistuning, or whether it is exhibiting signs of being about to fail.

[353] In such cases, the user’s audio and/or visual acuity can be improved by tuning a frequency of the digital eyewear (or digital earwear) so as to match the frequency of the moving object, or to match a harmonic frequency thereof, so as to operate in synchrony therewith. For example, if a turbine blade rotates at 1,000 rotations/minute, tuning the digital eyewear to operate at the same frequency, or a harmonic thereof, should allow a user to view that object so as to appear substantially stationary. Similarly, if a wheel is rotating so as to allow a vehicle to proceed at 60 miles per hour, tuning the digital eyewear to the wheel’s rotational frequency, or a harmonic thereof, should allow the user to view that wheel so as to appear as if it is substantially not rotating. In such cases, the user should be able to inspect the object more closely and determine fine details of the object while the object is rotating.
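The synchrony condition can be illustrated with a short sketch, assuming an idealized instantaneous shutter; an apparent drift of zero means the feature appears frozen to the wearer. The function name and the trial frequencies are illustrative only.

    def apparent_drift_deg(rotation_hz, shutter_hz):
        # Angular slip of a surface feature between successive exposures.
        # Zero slip occurs when the shutter runs at the rotation frequency
        # or an integer sub-multiple of it, so the feature appears frozen.
        rotations_per_exposure = rotation_hz / shutter_hz
        drift = (rotations_per_exposure % 1.0) * 360.0
        # Report a signed angle so 350 degrees reads as -10 degrees of slip.
        return drift - 360.0 if drift > 180.0 else drift

    turbine_hz = 1000.0 / 60.0                               # 1,000 rotations/minute
    print(apparent_drift_deg(turbine_hz, turbine_hz))        # 0.0: frozen
    print(apparent_drift_deg(turbine_hz, turbine_hz / 2.0))  # 0.0: frozen
    print(apparent_drift_deg(turbine_hz, 16.0))              # ~15 degrees: slips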

[354] Similarly, if an engine is emitting a sound intermittently at a selected frequency, tuning digital earwear (thus, earphones that periodically interrupt the audio signal the user is able to hear) to pass that audio signal at the same frequency should allow the user to hear that intermittent sound in a relatively continuous manner. Alternatively, if an engine is emitting a sound at a frequency above or below a human hearing range, tuning digital earwear with respect to that frequency can shift that audio signal into a human hearing range. In such cases, the user should be able to inspect the audio signal from the object more closely and determine fine details of that audio signal even while the object is making other and possibly louder noises.
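The frequency-shifting alternative can be sketched with simple heterodyne mixing, a standard technique consistent with, though not necessarily identical to, the earwear behavior described here. The sample rate, tone, and mixing frequency are hypothetical, and a complete implementation would low-pass filter away the sum component.

    import numpy as np

    def heterodyne_to_audible(signal, sample_rate_hz, shift_hz):
        # Mixing with a cosine at shift_hz produces sum and difference
        # frequencies; the difference term moves an ultrasonic component
        # into the audible band (a low-pass filter would then remove the
        # sum term in a fuller implementation).
        t = np.arange(signal.size) / sample_rate_hz
        return signal * np.cos(2.0 * np.pi * shift_hz * t)

    # Hypothetical numbers: a 40 kHz machine tone sampled at 192 kHz,
    # mixed down so a 2 kHz difference tone lands in human hearing range.
    rate = 192_000
    t = np.arange(rate) / rate
    ultrasonic = np.sin(2.0 * np.pi * 40_000.0 * t)
    audible = heterodyne_to_audible(ultrasonic, rate, 38_000.0)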

Dynamic eye tracking

[355] In one embodiment, the digital eyewear can be disposed to operate either with or without dynamic eye tracking. For example, with dynamic eye tracking, the digital eyewear can be disposed to identify a selected object at which the user is looking, and to select a frequency at which to operate so as to maximize the user’s audio and/or visual acuity with respect to that particular object. If the object is a ball in a sports event, the digital eyewear can be disposed to improve the user’s audio and/or visual acuity with respect to that particular object, or with respect to that object’s particular speed and direction of travel vis-a-vis the user.
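One plausible way to couple gaze data to frequency selection is sketched below; the object-record keys ("center", "repeat_hz"), the function name, and the sample values are hypothetical. The fallback value corresponds to the non-eye-tracked, whole-field mode described in the next paragraph, so the same routine degrades gracefully when no object is tracked.

    def select_strobe_hz(gaze_xy, tracked_objects, default_hz=None):
        # With eye tracking: return the repeat frequency of the tracked
        # object nearest the gaze point. Without usable tracking data:
        # fall back to a global default for the whole field of view.
        if not tracked_objects:
            return default_hz
        gx, gy = gaze_xy
        nearest = min(
            tracked_objects,
            key=lambda obj: (obj["center"][0] - gx) ** 2
                          + (obj["center"][1] - gy) ** 2,
        )
        return nearest["repeat_hz"]

    # Hypothetical tracked objects: a spinning ball and a rotating wheel.
    objects = [
        {"center": (120, 80), "repeat_hz": 8.0},    # ball
        {"center": (400, 300), "repeat_hz": 11.7},  # wheel
    ]
    print(select_strobe_hz((130, 90), objects))     # 8.0: nearest to gaze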

[356] For another example, without dynamic eye tracking, the digital eyewear can be disposed to operate with respect to the user’s entire field of view, so as to improve the user’s audio and/or visual acuity with respect to an ambient environment, rather than with respect to a particular selected object. In such cases, the digital eyewear can be disposed so as to remove a selected distraction from the user’s ambient environment, without having to determine in which direction or at what focal length the user is looking. Similarly, in such cases, the digital eyewear can be disposed so as to improve the user’s audio and/or visual acuity at a selected frequency, without having to determine in which direction or at what focal length the user is looking.

ALTERNATIVE EMBODIMENTS

[357] Although this Application primarily describes one set of preferred techniques for digital visual optimization, in the context of the invention, there is no particular requirement for any such limitation. Other techniques for digital visual optimization, and related matters, would also be workable, and are within the scope and spirit of this description. After reading this Application, those skilled in the art would be able to incorporate such other techniques with the techniques shown herein.