
Title:
USER INTERFACE SYSTEMS FOR HEAD-WORN COMPUTERS
Document Type and Number:
WIPO Patent Application WO/2017/177122
Kind Code:
A1
Abstract:
A frame is mechanically adapted to position a see-through optical computer display in front of a user's eye. The frame includes an arm adapted to hold the frame to the user's head, a rotary dial mounted on the arm and accessible to the user, and a direction selection device mounted on the arm proximate the rotary dial. The rotary dial is adapted to move a graphical selection element either laterally or vertically within a graphical user interface, wherein the movement direction is based on a selection setting of the selection device.

Inventors:
OSTERHOUT RALPH F (US)
LOHSE ROBERT MICHAEL (US)
RAMAIAH DASHNAMOORTHY (US)
SHAMS NIMA L (US)
MAHESHAIAH SUHAS (US)
BORDER JOHN N (US)
BIETRY JOSEPH (US)
MIHAYLOV TODOR (US)
NORTRUP EDWARD H (US)
HADDICK JOHN D (US)
Application Number:
PCT/US2017/026577
Publication Date:
October 12, 2017
Filing Date:
April 07, 2017
Assignee:
OSTERHOUT GROUP INC (US)
International Classes:
G06F1/16; G02B27/01; G06F3/02; G06F3/033; G06F3/0362; G06F3/038; G06F3/0482
Foreign References:
US9292082B12016-03-22
US20070229458A12007-10-04
US20010017614A12001-08-30
US6297795B12001-10-02
US8451229B22013-05-28
US7042441B22006-05-09
US20150309317A12015-10-29
Attorney, Agent or Firm:
IRIZARRY, Stacey et al. (US)
Claims:
Claims

We claim:

1. A head-worn computer, comprising:

a frame mechanically adapted to position a see-through optical computer display in front of a user's eye, wherein the frame includes an arm adapted to hold the frame to the user's head;

a rotary dial mounted on the arm and accessible to the user;

a direction selection device mounted on the arm proximate the rotary dial; and the rotary dial adapted to move a graphical selection element either laterally or vertically within a graphical user interface, wherein the movement direction is based on a selection setting of the selection device.

2. The head-worn computer of claim 1, wherein the rotary dial includes a plurality of mechanical stops adapted to cause the rotary dial to stop turning in increments to correspondingly cause the graphical selection element to pause on a next selectable object in the graphical user interface.

3. The head-worn computer of claim 1, wherein the direction selection device causes the graphical selection element to move vertically when activated a first time and horizontally when activated a second time.

4. The head-worn computer of claim 1, wherein the graphical user interface includes a plurality of selectable elements, wherein the graphical selection element snaps to a next selectable element in the plurality of selectable elements, and wherein the direction of the next selectable element is based on a selection setting of the direction selection device.

5. The head-worn computer of claim 1, wherein the graphical user interface includes a continuously scrollable environment, wherein the direction of the scroll in the graphical user interface is dependent on a selection setting of the direction selection device.

6. The head-worn computer of claim 1, further comprising a haptic feedback system adapted to provide a level of haptic feedback corresponding to a turn of the rotary dial.

7. The head-worn computer of claim 6, wherein the haptic feedback is provided by a system that includes a plurality of haptic strips.

8. The head-worn computer of claim 6, wherein the level of haptic feedback is variable based on an interaction with the rotary dial.

9. The head-worn computer of claim 6, wherein the level of haptic feedback is variable based on an interaction with the direction selection device.

10. The head-worn computer of claim 6, wherein the level of haptic feedback is variable based on a direction of turn of the rotary dial.

11. The head-worn computer of claim 1, further comprising a haptic feedback system adapted to provide a level of haptic feedback corresponding to an activation of the direction selection device.

12. The head-worn computer of claim 11, wherein the haptic feedback is provided by a system that includes a plurality of haptic strips.

13. The head-worn computer of claim 11, wherein the level of haptic feedback is variable based on an interaction with the rotary dial.

14. The head-worn computer of claim 11, wherein the level of haptic feedback is variable based on an interaction with the direction selection device.

15. The head-worn computer of claim 11, wherein the level of haptic feedback is variable based on a direction of turn of the rotary dial.

16. The head-worn computer of claim 1, wherein the rotary dial is adapted to accept a press towards the center of the dial as a selection of an item.

17. The head-worn computer of claim 1, wherein the direction selection device is adapted to accept a type of interaction that causes the selection of an item in the graphical user interface.

18. The head-worn computer of claim 17, wherein the type of interaction is a length of selection time.

19. The head-worn computer of claim 17, wherein the type of interaction is a pattern of user interactions.

20. The head-worn computer of claim 19, wherein the pattern includes a plurality of quick activations.

21. A head-worn computer, comprising:

an inertial measurement unit ("IMU") in communication with a processor;

the processor adapted to identify a user tap control action, the user tap control action being a finger tap on a frame of the head-worn computer, wherein the finger tap is measured by the IMU; and

the processor further adapted to control an aspect of a software application operating on the head-worn computer.

22. A method for synchronizing content from the cloud between multiple head-worn computers to provide a synchronized experience to multiple users of head-worn computers, comprising:

linking the multiple users to the same access point in the cloud;

identifying how many head-worn computers will be included in the synchronized experience of the content, the multiple users indicating to the cloud that they would like to participate in a synchronized experience of the content;

downloading the content to the multiple head-worn computers from the cloud;

polling the multiple head-worn computers to determine the percentage of the content that has been downloaded to each of the multiple head-worn computers; and

when all of the multiple head-worn computers have exceeded a predetermined percentage of content downloaded, sending a start command to each of the head-worn computers simultaneously to begin a synchronized presentation of the content to all of the multiple users.

23. An optical assembly for displaying an image in a head-worn display, comprising:

an image source providing image light;

display optics including multiple elements with optical surfaces that are cemented together with one or more transparent adhesives;

wherein the cemented display optics include multiple internal optical surfaces comprising at least one refractive surface supplying optical power and at least two partially reflective surfaces; and

the cemented display optics present the image light in a display field of view to display the image to a user.

24. A head-worn computer, comprising:

a see-through display wherein computer content is presented to a user wearing the head-worn computer and through which the user sees a surrounding environment,

wherein the see-through display generates image light comprising narrow bandwidths of red, green and blue light and wherein the see-through display further includes a tristimulus notch mirror positioned to reflect the image light towards the user's eye, and wherein the tristimulus notch mirror reflects less than a full width half max of the red image light.
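The dial-and-toggle interaction recited in claims 1-5 can be made concrete with a minimal Python sketch. It is an illustration only, not part of the claimed subject matter, and all names in it are hypothetical: the direction selection device toggles between lateral and vertical movement, and each detent of the rotary dial snaps a graphical selection element to the next selectable object.

class DialNavigator:
    def __init__(self, columns, rows):
        self.columns, self.rows = columns, rows
        self.col, self.row = 0, 0
        self.horizontal = True  # current selection setting (claim 1)

    def toggle_direction(self):
        # Claim 3: the first activation selects vertical movement,
        # the next selects horizontal movement, and so on.
        self.horizontal = not self.horizontal

    def on_detent(self, steps):
        # Claim 2: mechanical stops make the dial turn in increments,
        # so input arrives as whole detents and the selection element
        # snaps to the next selectable object (claim 4).
        if self.horizontal:
            self.col = (self.col + steps) % self.columns
        else:
            self.row = (self.row + steps) % self.rows

nav = DialNavigator(columns=4, rows=3)
nav.on_detent(+1)       # lateral move to the next object
nav.toggle_direction()  # direction selection device activated
nav.on_detent(-1)       # vertical move to the previous object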

Description:
USER INTERFACE SYSTEMS FOR HEAD-WORN COMPUTERS

Claim to Priority

[0001] This application claims the benefit of the following provisional applications, each of which is hereby incorporated by reference in its entirety: United States Application Number 15/149,456, filed May 9, 2016 (ODGP-1014-U01); United States Application Number 15/242,893, filed August 22, 2016 (ODGP-1015-U01); United States Application Number 15/334,412, filed October 26, 2016 (ODGP-1016-U01); United States Application Number 15/162,737, filed May 24, 2016 (ODGP-2027-U01); United States Application Number 15/259,465, filed September 8, 2016 (ODGP-2031-U01); United States Application Number 15/397,920, filed January 4, 2017 (ODGP-2033-U01); United States Application Number 15/094,039, filed April 8, 2016 (ODGP-2026-U01); United States Application Number 15/155,476, filed May 16, 2016 (ODGP-3021-U01); and United States Application Number 15/170,256, filed June 1, 2016 (ODGP-4010-U01).

Background

Field of the Disclosure

This disclosure relates to user interfaces for head-worn computer systems.

Description of Related Art

[0002] Head mounted displays (HMDs) and particularly HMDs that provide a see-through view of the environment are valuable instruments. The presentation of content in the see-through display can be a complicated operation when attempting to ensure that the user experience is optimized. Improved systems and methods for presenting content in the see-through display are required to improve the user experience.

Summary

[0003] Aspects of the present disclosure relate to methods and systems for providing audio systems in head-worn computer systems.

[0004] Aspects of the present disclosure relate to user interface methods and systems for head-worn computer systems.

[0005] These and other systems, methods, objects, features, and advantages of the present disclosure will be apparent to those skilled in the art from the following detailed description of the preferred embodiment and the drawings. All documents mentioned herein are hereby incorporated in their entirety by reference.

Brief Description of the Drawings

[0006] Embodiments are described with reference to the following Figures. The same numbers may be used throughout to reference like features and components that are shown in the Figures:

[0007] Figure 1 illustrates a head worn computing system in accordance with the principles of the present disclosure.

[0008] Figure 2 illustrates a head worn computing system with optical system in accordance with the principles of the present disclosure.

[0009] Figure 3a illustrates a large prior art optical arrangement.

[00010] Figure 3b illustrates an upper optical module in accordance with the principles of the present disclosure.

[00011] Figure 4 illustrates an upper optical module in accordance with the principles of the present disclosure.

[00012] Figure 4a illustrates an upper optical module in accordance with the principles of the present disclosure.

[00013] Figure 4b illustrates an upper optical module in accordance with the principles of the present disclosure.

[00014] Figure 5 illustrates an upper optical module in accordance with the principles of the present disclosure.

[00015] Figure 5a illustrates an upper optical module in accordance with the principles of the present disclosure.

[00016] Figure 5b illustrates an upper optical module and dark light trap according to the principles of the present disclosure.

[00017] Figure 5c illustrates an upper optical module and dark light trap according to the principles of the present disclosure.

[00018] Figure 5d illustrates an upper optical module and dark light trap according to the principles of the present disclosure.

[00019] Figure 5e illustrates an upper optical module and dark light trap according to the principles of the present disclosure.

[00020] Figure 6 illustrates upper and lower optical modules in accordance with the principles of the present disclosure.

[00021] Figure 7 illustrates angles of combiner elements in accordance with the principles of the present disclosure.

[00022] Figure 8 illustrates upper and lower optical modules in accordance with the principles of the present disclosure.

[00023] Figure 8a illustrates upper and lower optical modules in accordance with the principles of the present disclosure.

[00024] Figure 8b illustrates upper and lower optical modules in accordance with the principles of the present disclosure.

[00025] Figure 8c illustrates upper and lower optical modules in accordance with the principles of the present disclosure.

[00026] Figure 9 illustrates an eye imaging system in accordance with the principles of the present disclosure.

[00027] Figure 10 illustrates a light source in accordance with the principles of the present disclosure.

[00028] Figure 10a illustrates a back lighting system in accordance with the principles of the present disclosure.

[00029] Figure 10b illustrates a back lighting system in accordance with the principles of the present disclosure.

[00030] Figures 11a to 11d illustrate light source and filters in accordance with the principles of the present disclosure.

[00031] Figures 12a to 12c illustrate light source and quantum dot systems in accordance with the principles of the present disclosure.

[00032] Figures 13a to 13c illustrate peripheral lighting systems in accordance with the principles of the present disclosure.

[00033] Figures 14a to 14h illustrate light suppression systems in accordance with the principles of the present disclosure.

[00034] Figure 14i illustrates a head-worn computer with scene cameras in accordance with the principles of the present disclosure.

[00035] Figures 14j and 14j2 illustrate a head-worn computer with an internal speaker in accordance with the principles of the present disclosure.

[00036] Figure 14k illustrates a configuration for an internal speaker in accordance with the principles of the present disclosure.

[00037] Figures 14ka to 14ki illustrate audio systems in accordance with the principles of the present disclosure.

[00038] Figure 14l illustrates a removable electrochromic lens in accordance with the principles of the present disclosure.

[00039] Figure 14m illustrates electrochromic lens connection systems in accordance with the principles of the present disclosure.

[00040] Figure 15 illustrates an external user interface in accordance with the principles of the present disclosure.

[00041] Figures 16a to 16c illustrate distance control systems in accordance with the principles of the present disclosure.

[00042] Figures 17a to 17c illustrate force interpretation systems in accordance with the principles of the present disclosure.

[00043] Figures 18a to 18c illustrate user interface mode selection systems in accordance with the principles of the present disclosure.

[00044] Figure 19 illustrates interaction systems in accordance with the principles of the present disclosure.

[00045] Figure 20 illustrates external user interfaces in accordance with the principles of the present disclosure.

[00046] Figure 21 illustrates mD trace representations presented in accordance with the principles of the present disclosure.

[00047] Figure 22 illustrates mD trace representations presented in accordance with the principles of the present disclosure.

[00048] Figure 23 illustrates an mD scanned environment in accordance with the principles of the present disclosure.

[00049] Figure 23a illustrates mD trace representations presented in accordance with the principles of the present disclosure.

[00050] Figure 24 illustrates a stray light suppression technology in accordance with the principles of the present disclosure.

[00051] Figure 25 illustrates a stray light suppression technology in accordance with the principles of the present disclosure.

[00052] Figure 26 illustrates a stray light suppression technology in accordance with the principles of the present disclosure.

[00053] Figure 27 illustrates a stray light suppression technology in accordance with the principles of the present disclosure.

[00054] Figures 28a to 28c illustrate DLP mirror angles.

[00055] Figures 29, 30, 31, 32, 32a, and 33 illustrate eye imaging systems according to the principles of the present disclosure.

[00056] Figures 34 and 34a illustrate structured eye lighting systems according to the principles of the present disclosure.

[00057] Figure 35 illustrates eye glint in the prediction of eye direction analysis in accordance with the principles of the present disclosure.

[00058] Figure 36a illustrates eye characteristics that may be used in personal identification through analysis of a system according to the principles of the present disclosure.

[00059] Figure 36b illustrates a digital content presentation reflection off of the wearer's eye that may be analyzed in accordance with the principles of the present disclosure.

[00060] Figure 37 illustrates eye imaging along various virtual target lines and various focal planes in accordance with the principles of the present disclosure.

[00061] Figure 38 illustrates content control with respect to eye movement based on eye imaging in accordance with the principles of the present disclosure.

[00062] Figure 39 illustrates eye imaging and eye convergence in accordance with the principles of the present disclosure.

[00063] Figure 40 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00064] Figure 41 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00065] Figure 42 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00066] Figure 43 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00067] Figure 44 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00068] Figure 45 illustrates various headings over time in an example.

[00069] Figure 46 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00070] Figure 47 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00071] Figure 48 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00072] Figure 49 illustrates content position dependent on sensor feedback in accordance with the principles of the present disclosure.

[00073] Figure 50 illustrates light impinging an eye in accordance with the principles of the present disclosure.

[00074] Figure 51 illustrates a view of an eye in accordance with the principles of the present disclosure.

[00075] Figures 52a and 52b illustrate views of an eye with a structured light pattern in accordance with the principles of the present disclosure.

[00076] Figure 53 illustrates an optics module in accordance with the principles of the present disclosure.

[00077] Figure 54 illustrates an optics module in accordance with the principles of the present disclosure.

[00078] Figure 55 shows a series of example spectra for a variety of controlled substances as measured using a form of infrared spectroscopy.

[00079] Figure 56 shows an infrared absorbance spectrum for glucose.

[00080] Figures 57 and 58 illustrate user interface systems in accordance with the principles of the present disclosure.

[00081] Figure 59 illustrates a 'frame tap' user interface in accordance with the principles of the present disclosure.

[00082] Figure 60 illustrates strain gauge user interfaces on head-worn computers and an external user interface in accordance with the principles of the present disclosure.

[00083] Figures 61, 62, and 63 illustrate modular expansion modules in accordance with the principles of the present disclosure.

[00084] Figures 64, 65, 66, 67, 68, 69, 70, 71, and 72 illustrate eye-imaging systems in accordance with the principles of the present disclosure.

[00085] Figure 73 illustrates a user interface in accordance with the principles of the present disclosure.

[00086] Figure 74 illustrates a user interface in accordance with the principles of the present disclosure.

[00087] Figures 75 and 76 illustrate haptic systems in accordance with the principles of the present disclosure.

[00088] Figures 77, 78, 79a, 79b, 80, 81, 82, 83, 84, 84a, 85, 86, 87 and 88 illustrate solid see-through optical systems in accordance with the principles of the present disclosure.

[00089] Figure 89 illustrates a corrective optic and a see-through optical system in accordance with the principles of the present disclosure.

[00090] Figure 90 illustrates LED emission spectra for a display system in accordance with the principles of the present disclosure.

[00091] Figures 91, 92, 93 and 94 illustrate performance provided by various notch mirrors in accordance with the principles of the present disclosure.

[00092] Figures 94a and 94b show how angles of incidence (AOI) and cone half angle (CFA) cause the performance of a bandpass filter to change.

[00093] Figures 95, 95a, 95b and 96 illustrate various optical systems in accordance with the principles of the present disclosure.

[00094] Figure 97 is an illustration of a cross section of an emissive image source such as an OLED.

[00095] Figure 98 shows a color filter layout wherein the colors repeat in rows and the rows are offset from one another by one subpixel.

[00096] Figure 99 shows a color filter layout wherein the colors repeat in rows.

[00097] Figure 100 shows a color filter layout wherein the colors repeat in rows and each row is offset from neighboring rows by 1 ½ subpixels.

[00098] Figure 101 shows an illustration of rays of image light as emitted by a single subpixel in a pixel.

[00099] Figure 102 is an illustration of how the ray angles of the image light sampled by a lens in forming an image for display in a typical compact head-worn computer vary across an image source.

[000100] Figure 102a shows an illustration of a compact optical system with a folded optical path wherein light rays are shown passing through the optics from the emissive image source to the eyebox where the user can view the image.

[000101] Figure 102b shows a thin lens layout with a relatively long focal length and a relatively narrow field of view.

[000102] Figure 102c shows a thin lens layout with a reduced length and a wider field of view.

[000103] Figure 103 is an illustration of the chief ray angles sampled by the lens over the surface of the image source.

[000104] Figure 104 is an illustration of a cross section of a portion of an image source wherein Pixel 1 is a center pixel and Pixel 5 is an edge pixel.

[000105] Figure 105 shows a modified color filter array wherein the color filter array is somewhat larger than the array of subpixels.

[000106] Figure 106 shows the effect of the progressively offset color filter array.

[000107] Figure 107 shows an illustration of an optical solution wherein the rays from each subpixel are repointed so that zero angle rays become rays with the chief ray angle matched to the sampling of the lens.

[000108] Figure 108 shows an illustration of an array of subpixels on an image source, wherein the center point of the image source is a subpixel in the array of subpixels.

[000109] Figure 109 illustrates a user looking at a phone.

[000110] Figures 110 and 111 illustrate a user wearing a HWC.

[000111] Figure 112 illustrates a see-through head-worn display with displayed content.

[000112] Figure 113 illustrates a user wearing a HWC.

[000113] Figure 114 illustrates a see-through head-worn display with displayed content.

[000114] Figure 115 illustrates a system for synchronizing the presentation of content among several HWCs.

[000115] While the disclosure has been described in connection with certain preferred embodiments, other embodiments would be understood by one of ordinary skill in the art and are encompassed herein.

Detailed Description of the Preferred Embodiment(s)

[000116] Aspects of the present disclosure relate to head-worn computing ("HWC") systems. HWC involves, in some instances, a system that mimics the appearance of head-worn glasses or sunglasses. The glasses may be a fully developed computing platform, such as including computer displays presented in each of the lenses of the glasses to the eyes of the user. In embodiments, the lenses and displays may be configured to allow a person wearing the glasses to see the environment through the lenses while also seeing, simultaneously, digital imagery, which forms an overlaid image that is perceived by the person as a digitally augmented image of the environment, or augmented reality ("AR").

[000117] HWC involves more than just placing a computing system on a person's head. The system may need to be designed as a lightweight, compact and fully functional computer display, such as wherein the computer display includes a high resolution digital display that provides a high level of immersion comprised of the displayed digital content and the see-through view of the environmental surroundings. User interfaces and control systems suited to the HWC device may be required that are unlike those used for a more conventional computer such as a laptop. For the HWC and associated systems to be most effective, the glasses may be equipped with sensors to determine environmental conditions, geographic location, relative positioning to other points of interest, objects identified by imaging and movement by the user or other users in a connected group, and the like. The HWC may then change the mode of operation to match the conditions, location, positioning, movements, and the like, in a method generally referred to as a contextually aware HWC. The glasses also may need to be connected, wirelessly or otherwise, to other systems either locally or through a network. Controlling the glasses may be achieved through the use of an external device, automatically through contextually gathered information, through user gestures captured by the glasses' sensors, and the like. Each technique may be further refined depending on the software application being used in the glasses. The glasses may further be used to control or coordinate with external devices that are associated with the glasses.

[000118] Referring to Fig. 1, an overview of the HWC system 100 is presented. As shown, the HWC system 100 comprises a HWC 102, which in this instance is configured as glasses to be worn on the head with sensors such that the HWC 102 is aware of the objects and conditions in the environment 114. In this instance, the HWC 102 also receives and interprets control inputs such as gestures and movements 116. The HWC 102 may communicate with external user interfaces 104. The external user interfaces 104 may provide a physical user interface to take control instructions from a user of the HWC 102, and the external user interfaces 104 and the HWC 102 may communicate bi-directionally to effect the user's command and provide feedback to the external device 108. The HWC 102 may also communicate bi-directionally with externally controlled or coordinated local devices 108. For example, an external user interface 104 may be used in connection with the HWC 102 to control an externally controlled or coordinated local device 108. The externally controlled or coordinated local device 108 may provide feedback to the HWC 102 and a customized GUI may be presented in the HWC 102 based on the type of device or specifically identified device 108. The HWC 102 may also interact with remote devices and information sources 112 through a network connection 110. Again, the external user interface 104 may be used in connection with the HWC 102 to control or otherwise interact with any of the remote devices 108 and information sources 112 in a similar way as when the external user interfaces 104 are used to control or otherwise interact with the externally controlled or coordinated local devices 108. Similarly, the HWC 102 may interpret gestures 116 (e.g. captured from forward, downward, upward, or rearward facing sensors such as camera(s), range finders, IR sensors, etc.) or environmental conditions sensed in the environment 114 to control either local or remote devices 108 or 112.

[000119] We will now describe each of the main elements depicted in Fig. 1 in more detail; however, these descriptions are intended to provide general guidance and should not be construed as limiting. Additional description of each element may also be further described herein.

[000120] The HWC 102 is a computing platform intended to be worn on a person's head. The HWC 102 may take many different forms to fit many different functional requirements. In some situations, the HWC 102 will be designed in the form of conventional glasses. The glasses may or may not have active computer graphics displays. In situations where the HWC 102 has integrated computer displays the displays may be configured as see-through displays such that the digital imagery can be overlaid with respect to the user's view of the environment 114. There are a number of see-through optical designs that may be used, including ones that have a reflective display (e.g. LCoS, DLP), emissive displays (e.g. OLED, LED), hologram, TIR waveguides, and the like. In embodiments, lighting systems used in connection with the display optics may be solid state lighting systems, such as LED, OLED, quantum dot, quantum dot LED, etc. In addition, the optical configuration may be monocular or binocular. It may also include vision corrective optical components. In embodiments, the optics may be packaged as contact lenses. In other embodiments, the HWC 102 may be in the form of a helmet with a see-through shield, sunglasses, safety glasses, goggles, a mask, fire helmet with see-through shield, police helmet with see-through shield, military helmet with see-through shield, utility form customized to a certain work task (e.g. inventory control, logistics, repair, maintenance, etc.), and the like.

[000121] The HWC 102 may also have a number of integrated computing facilities, such as an integrated processor, integrated power management, communication structures (e.g. cell net, WiFi, Bluetooth, local area connections, mesh connections, remote connections (e.g. client server, etc.)), and the like. The HWC 102 may also have a number of positional awareness sensors, such as GPS, electronic compass, altimeter, tilt sensor, IMU, and the like. It may also have other sensors such as a camera, rangefinder, hyper-spectral camera, Geiger counter, microphone, spectral illumination detector, temperature sensor, chemical sensor, biologic sensor, moisture sensor, ultrasonic sensor, and the like.

[000122] The HWC 102 may also have integrated control technologies. The integrated control technologies may be contextual based control, passive control, active control, user control, and the like. For example, the HWC 102 may have an integrated sensor (e.g. camera) that captures user hand or body gestures 116 such that the integrated processing system can interpret the gestures and generate control commands for the HWC 102. In another example, the HWC 102 may have sensors that detect movement (e.g. a nod, head shake, and the like) including accelerometers, gyros and other inertial measurements, where the integrated processor may interpret the movement and generate a control command in response. The HWC 102 may also automatically control itself based on measured or perceived environmental conditions. For example, if it is bright in the environment the HWC 102 may increase the brightness or contrast of the displayed image. In embodiments, the integrated control technologies may be mounted on the HWC 102 such that a user can interact with it directly. For example, the HWC 102 may have a button(s), touch capacitive interface, and the like.
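The passive brightness control described above can be sketched in a few lines of Python. This is a minimal editorial illustration, not part of the disclosure: the sensor reading, the logarithmic mapping, the lux thresholds, and all names are assumptions.

import math

def display_brightness(ambient_lux, min_level=0.2, max_level=1.0):
    """Map an ambient light reading to a display brightness level,
    scaling logarithmically between ~10 lux and ~10,000 lux."""
    t = (math.log10(max(ambient_lux, 10.0)) - 1.0) / 3.0
    t = min(max(t, 0.0), 1.0)
    return min_level + t * (max_level - min_level)

print(display_brightness(50))      # dim interior -> low brightness
print(display_brightness(10000))   # bright daylight -> full brightness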

[000123] As described herein, the HWC 102 may be in communication with external user interfaces 104. The external user interfaces may come in many different forms. For example, a cell phone screen may be adapted to take user input for control of an aspect of the HWC 102. The external user interface may be a dedicated UI, such as a keyboard, touch surface, button(s), joystick, and the like. In embodiments, the external controller may be integrated into another device such as a ring, watch, bike, car, and the like. In each case, the external user interface 104 may include sensors (e.g. IMU, accelerometers, compass, altimeter, and the like) to provide additional input for controlling the HWC 102.

[000124] As described herein, the HWC 102 may control or coordinate with other local devices 108. The external devices 108 may be an audio device, visual device, vehicle, cell phone, computer, and the like. For instance, the local external device 108 may be another HWC 102, where information may then be exchanged between the separate HWCs 108.

[000125] Similar to the way the HWC 102 may control or coordinate with local devices 108, the HWC 102 may control or coordinate with remote devices 112, such as the HWC 102 communicating with the remote devices 112 through a network 110. Again, the form of the remote device 112 may have many forms. Included in these forms is another HWC 102. For example, each HWC 102 may communicate its GPS position such that all the HWCs 102 know where all of the HWCs 102 are located.

[000126] Figure 2 illustrates a HWC 102 with an optical system that includes an upper optical module 202 and a lower optical module 204. While the upper and lower optical modules 202 and 204 will generally be described as separate modules, it should be understood that this is illustrative only and the present disclosure includes other physical configurations, such as when the two modules are combined into a single module or where the elements making up the two modules are configured into more than two modules. In embodiments, the upper module 202 includes a computer controlled display (e.g. LCoS, DLP, OLED, etc.) and image light delivery optics. In embodiments, the lower module includes eye delivery optics that are configured to receive the upper module's image light and deliver the image light to the eye of a wearer of the HWC. In Figure 2, it should be noted that while the upper and lower optical modules 202 and 204 are illustrated on one side of the HWC such that image light can be delivered to one eye of the wearer, the present disclosure envisions embodiments that contain two image light delivery systems, one for each eye.

[000127] Figure 3b illustrates an upper optical module 202 in accordance with the principles of the present disclosure. In this embodiment, the upper optical module 202 includes a DLP (also known as DMD or digital micromirror device) computer operated display 304 which includes pixels comprised of rotatable mirrors (such as, for example, the DLP3000 available from Texas Instruments), polarized light source 302, ¼ wave retarder film 308, reflective polarizer 310 and a field lens 312. The polarized light source 302 provides substantially uniform polarized light that is generally directed towards the reflective polarizer 310. The reflective polarizer reflects light of one polarization state (e.g. S polarized light) and transmits light of the other polarization state (e.g. P polarized light). The polarized light source 302 and the reflective polarizer 310 are oriented so that the polarized light from the polarized light source 302 is reflected generally towards the DLP 304. The light then passes through the ¼ wave film 308 once before illuminating the pixels of the DLP 304 and then again after being reflected by the pixels of the DLP 304. In passing through the ¼ wave film 308 twice, the light is converted from one polarization state to the other polarization state (e.g. the light is converted from S to P polarized light). The light then passes through the reflective polarizer 310. In the event that the DLP pixel(s) are in the "on" state (i.e. the mirrors are positioned to reflect light towards the field lens 312), the "on" pixels reflect the light generally along the optical axis and into the field lens 312. This light that is reflected by "on" pixels and which is directed generally along the optical axis of the field lens 312 will be referred to as image light 316. The image light 316 then passes through the field lens to be used by a lower optical module 204.

[000128] The light that is provided by the polarized light source 302, which is subsequently reflected by the reflective polarizer 310 before it reflects from the DLP 304, will generally be referred to as illumination light. The light that is reflected by the "off" pixels of the DLP 304 is reflected at a different angle than the light reflected by the "on" pixels, so that the light from the "off" pixels is generally directed away from the optical axis of the field lens 312 and toward the side of the upper optical module 202 as shown in FIG. 3b. The light that is reflected by the "off" pixels of the DLP 304 will be referred to as dark state light 314.

[000129] The DLP 304 operates as a computer controlled display and is generally thought of as a MEMS device. The DLP pixels are comprised of small mirrors that can be directed. The mirrors generally flip from one angle to another angle. The two angles are generally referred to as states. When light is used to illuminate the DLP the mirrors will reflect the light in a direction depending on the state. In embodiments herein, we generally refer to the two states as "on" and "off," which is intended to depict the condition of a display pixel. "On" pixels will be seen by a viewer of the display as emitting light because the light is directed along the optical axis and into the field lens and the associated remainder of the display system. "Off" pixels will be seen by a viewer of the display as not emitting light because the light from these pixels is directed to the side of the optical housing and into a light trap or light dump where the light is absorbed. The pattern of "on" and "off" pixels produces image light that is perceived by a viewer of the display as a computer generated image. Full color images can be presented to a user by sequentially providing illumination light with complementary colors such as red, green and blue, where the sequence is presented in a recurring cycle that is faster than the user can perceive as separate images, such that the user perceives a full color image comprised of the sum of the sequential images. Bright pixels in the image are provided by pixels that remain in the "on" state for the entire time of the cycle, while dimmer pixels in the image are provided by pixels that switch between the "on" state and "off" state within the time of the cycle, or frame time when in a video sequence of images.
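The duty-cycle behavior described in this paragraph can be expressed numerically. The sketch below is illustrative only (the 60 Hz frame rate and three equal color subframes are assumptions, not values from the disclosure): a pixel's perceived level in each color subframe is the fraction of that subframe spent in the "on" state.

FRAME_HZ = 60
SUBFRAMES = 3                                # sequential red, green, blue
SUBFRAME_US = 1e6 / (FRAME_HZ * SUBFRAMES)   # ~5556 microseconds each

def perceived_levels(on_time_us):
    """Relative brightness per color: 'on' time over subframe time."""
    return {color: t / SUBFRAME_US for color, t in on_time_us.items()}

print(perceived_levels({"red": 5556, "green": 5556, "blue": 5556}))  # ~1.0 each: bright white
print(perceived_levels({"red": 2778, "green": 1389, "blue": 0}))     # dimmer, reddish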

[000130] FIG. 3a shows an illustration of a system for a DLP 304 in which the unpolarized light source 350 is pointed directly at the DLP 304. In this case, the angle required for the illumination light is such that the field lens 352 must be positioned substantially distant from the DLP 304 to prevent the illumination light from being clipped by the field lens 352. The large distance between the field lens 352 and the DLP 304, along with the straight path of the dark state light 354, means that the light trap for the dark state light 354 is also located at a substantial distance from the DLP. For these reasons, this configuration is larger in size compared to the upper optics module 202 of the preferred embodiments.

[000131] The configuration illustrated in Figure 3b can be lightweight and compact such that it fits into a small portion of a HWC. For example, the upper modules 202 illustrated herein can be physically adapted to mount in an upper frame of a HWC such that the image light can be directed into a lower optical module 204 for presentation of digital content to a wearer's eye. The package of components that combine to generate the image light (i.e. the polarized light source 302, DLP 304, reflective polarizer 310 and ¼ wave film 308) is very light and compact. The height of the system, excluding the field lens, may be less than 8 mm. The width (i.e. from front to back) may be less than 8 mm. The weight may be less than 2 grams. The compactness of this upper optical module 202 allows for a compact mechanical design of the HWC, and the light weight nature of these embodiments helps make the HWC comfortable for the wearer.

[000132] The configuration illustrated in Figure 3b can produce sharp contrast, high brightness and deep blacks, especially when compared to LCD or LCoS displays used in HWCs. The "on" and "off" states of the DLP provide for a strong differentiator in the light reflection path representing an "on" pixel and an "off" pixel. As will be discussed in more detail below, the dark state light from the "off" pixel reflections can be managed to reduce stray light in the display system to produce images with high contrast.

[000133] Figure 4 illustrates another embodiment of an upper optical module 202 in accordance with the principles of the present disclosure. This embodiment includes a light source 404, but in this case, the light source can provide unpolarized illumination light. The illumination light from the light source 404 is directed into a TIR wedge 418 such that the illumination light is incident on an internal surface of the TIR wedge 418 (shown as the angled lower surface of the TIR wedge 418 in FIG. 4) at an angle that is beyond the critical angle as defined by Eqn 1.

[000134] Critical angle = arcsin(1/n)    Eqn 1

[000135] Where the critical angle is the angle beyond which the illumination light is reflected from the internal surface when the internal surface comprises an interface from a solid with a higher refractive index (n) to air with a refractive index of 1 (e.g. for an interface of acrylic, with a refractive index of n = 1.5, to air, the critical angle is 41.8 degrees; for an interface of polycarbonate, with a refractive index of n = 1.59, to air, the critical angle is 38.9 degrees). Consequently, the TIR wedge 418 is associated with a thin air gap 408 along the internal surface to create an interface between a solid with a higher refractive index and air. By choosing the angle of the light source 404 relative to the DLP 402 in correspondence to the angle of the internal surface of the TIR wedge 418, illumination light is turned toward the DLP 402 at an angle suitable for providing image light 414 as reflected from "on" pixels. Wherein, the illumination light is provided to the DLP 402 at approximately twice the angle of the pixel mirrors in the DLP 402 that are in the "on" state, such that after reflecting from the pixel mirrors, the image light 414 is directed generally along the optical axis of the field lens. Depending on the state of the DLP pixels, the illumination light from "on" pixels may be reflected as image light 414 which is directed towards a field lens and a lower optical module 204, while illumination light reflected from "off" pixels (generally referred to herein as "dark" state light, "off" pixel light or "off" state light) 410 is directed in a separate direction, which may be trapped and not used for the image that is ultimately presented to the wearer's eye.
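Eqn 1 and the two worked values above are straightforward to verify, as is the twice-the-mirror-angle relationship this paragraph describes (17 degree DLP mirrors illuminated at about 34 degrees; see Figure 4a and the discussion of Figure 4b). The following is an editorial verification sketch only:

import math

def critical_angle_deg(n):
    """Critical angle for a solid-to-air interface, per Eqn 1."""
    return math.degrees(math.asin(1.0 / n))

print(critical_angle_deg(1.5))    # acrylic: ~41.8 degrees
print(critical_angle_deg(1.59))   # polycarbonate: ~38.9-39.0 degrees

mirror_tilt_deg = 17.0
print(2 * mirror_tilt_deg)        # illumination at ~34 degrees puts the
                                  # "on"-state reflection on the optical axis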

[000136] The light trap for the dark state light 410 may be located along the optical axis defined by the direction of the dark state light 410 and in the side of the housing, with the function of absorbing the dark state light. To this end, the light trap may be comprised of an area outside of the cone of image light 414 from the "on" pixels. The light trap is typically made up of materials that absorb light including coatings of black paints or other light absorbing materials to prevent light scattering from the dark state light degrading the image perceived by the user. In addition, the light trap may be recessed into the wall of the housing or include masks or guards to block scattered light and prevent the light trap from being viewed adjacent to the displayed image.

[000137] The embodiment of Figure 4 also includes a corrective wedge 420 to correct the effect of refraction of the image light 414 as it exits the TIR wedge 418. By including the corrective wedge 420 and providing a thin air gap 408 (e.g. 25 micron), the image light from the "on" pixels can be maintained generally in a direction along the optical axis of the field lens (i.e. the same direction as that defined by the image light 414) so it passes into the field lens and the lower optical module 204. As shown in FIG. 4, the image light 414 from the "on" pixels exits the corrective wedge 420 generally perpendicular to the surface of the corrective wedge 420 while the dark state light exits at an oblique angle. As a result, the direction of the image light 414 from the "on" pixels is largely unaffected by refraction as it exits from the surface of the corrective wedge 420. In contrast, the dark state light 410 is substantially changed in direction by refraction when the dark state light 410 exits the corrective wedge 420.
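The refraction contrast described above follows directly from Snell's law (n1 sin θ1 = n2 sin θ2) at the exit surface of the corrective wedge 420. The sketch below is illustrative only; the wedge index of 1.5 and the 25 degree internal angle for the dark state light are editorial assumptions, not values from the disclosure.

import math

def exit_angle_deg(theta_inside_deg, n=1.5):
    """Angle in air for light leaving a solid of refractive index n."""
    s = n * math.sin(math.radians(theta_inside_deg))
    if s >= 1.0:
        return None  # beyond the critical angle: total internal reflection
    return math.degrees(math.asin(s))

print(exit_angle_deg(0.0))   # image light exits normal to the surface: 0.0 (unrefracted)
print(exit_angle_deg(25.0))  # dark state light exits obliquely: ~39.3 degrees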

[000138] The embodiment illustrated in Figure 4 has advantages similar to those discussed in connection with the embodiment of Figure 3b. The dimensions and weight of the upper module 202 depicted in Figure 4 may be approximately 8 x 8 mm with a weight of less than 3 grams. A difference in overall performance between the configuration illustrated in Figure 3b and the configuration illustrated in Figure 4 is that the embodiment of Figure 4 doesn't require the use of polarized light as supplied by the light source 404. This can be an advantage in some situations as will be discussed in more detail below (e.g. increased see-through transparency of the HWC optics from the user's perspective). Polarized light may be used in connection with the embodiment depicted in Figure 4, in embodiments. An additional advantage of the embodiment of Figure 4 compared to the embodiment shown in Figure 3b is that the dark state light (shown as DLP off light 410) is directed at a steeper angle away from the optical axis of the image light 414 due to the added refraction encountered when the dark state light 410 exits the corrective wedge 420. This steeper angle of the dark state light 410 allows for the light trap to be positioned closer to the DLP 402 so that the overall size of the upper module 202 can be reduced. The light trap can also be made larger since the light trap doesn't interfere with the field lens, thereby increasing the efficiency of the light trap; as a result, stray light can be reduced and the contrast of the image perceived by the user can be increased. Figure 4a illustrates the embodiment described in connection with Figure 4 with an example set of corresponding angles at the various surfaces and the reflected angles of a ray of light passing through the upper optical module 202. In this example, the DLP mirrors are provided at 17 degrees to the surface of the DLP device. The angles of the TIR wedge are selected in correspondence to one another to provide TIR reflected illumination light at the correct angle for the DLP mirrors while allowing the image light and dark state light to pass through the thin air gap; various combinations of angles are possible to achieve this.

[000139] Figure 5 illustrates yet another embodiment of an upper optical module 202 in accordance with the principles of the present disclosure. As with the embodiment shown in Figure 4, the embodiment shown in Figure 5 does not require the use of polarized light. Polarized light may be used in connection with this embodiment, but it is not required. The optical module 202 depicted in Figure 5 is similar to that presented in connection with Figure 4; however, the embodiment of Figure 5 includes an off light redirection wedge 502. As can be seen from the illustration, the off light redirection wedge 502 allows the image light 414 to continue generally along the optical axis toward the field lens and into the lower optical module 204 (as illustrated). However, the off light 504 is redirected substantially toward the side of the corrective wedge 420 where it passes into the light trap. This configuration may allow further height compactness in the HWC because the light trap (not illustrated) that is intended to absorb the off light 504 can be positioned laterally adjacent the upper optical module 202 as opposed to below it. In the embodiment depicted in Figure 5 there is a thin air gap between the TIR wedge 418 and the corrective wedge 420 (similar to the embodiment of Figure 4). There is also a thin air gap between the corrective wedge 420 and the off light redirection wedge 502. There may be HWC mechanical configurations that warrant the positioning of a light trap for the dark state light elsewhere, and the illustration depicted in Figure 5 should be considered illustrative of the concept that the off light can be redirected to create compactness of the overall HWC. Figure 5a illustrates an example of the embodiment described in connection with Figure 5 with the addition of more detail on the relative angles at the various surfaces; light ray traces for image light and for dark light are shown as they pass through the upper optical module 202. Again, various combinations of angles are possible.

[000140] Figure 4b shows an illustration of a further embodiment in which a solid transparent matched set of wedges 456 is provided with a reflective polarizer 450 at the interface between the wedges. Wherein the interface between the wedges in the wedge set 456 is provided at an angle so that illumination light 452 from the polarized light source 458 is reflected at the proper angle (e.g. 34 degrees for a 17 degree DLP mirror) for the DLP mirror "on" state so that the reflected image light 414 is provided along the optical axis of the field lens. The general geometry of the wedges in the wedge set 456 is similar to that shown in Figures 4 and 4a. A quarter wave film 454 is provided on the DLP 402 surface so that the illumination light 452 is one polarization state (e.g. S polarization state) while in passing through the quarter wave film 454, reflecting from the DLP mirror and passing back through the quarter wave film 454, the image light 414 is converted to the other polarization state (e.g. P polarization state). The reflective polarizer is oriented such that the illumination light 452 with its polarization state is reflected and the image light 414 with its other polarization state is transmitted. Since the dark state light from the "off" pixels 410 also passes through the quarter wave film 454 twice, it is also the other polarization state (e.g. P polarization state) so that it is transmitted by the reflective polarizer 450.

[000141] The angles of the faces of the wedge set 456 correspond to the needed angles to provide illumination light 452 at the angle needed by the DLP mirrors when in the "on" state so that the reflected image light 414 is reflected from the DLP along the optical axis of the field lens. The wedge set 456 provides an interior interface where a reflective polarizer film can be located to redirect the illumination light 452 toward the mirrors of the DLP 402. The wedge set also provides a matched wedge on the opposite side of the reflective polarizer 450 so that the image light 414 from the "on" pixels exits the wedge set 456 substantially perpendicular to the exit surface, while the dark state light from the "off" pixels 410 exits at an oblique angle to the exit surface. As a result, the image light 414 is substantially unrefracted upon exiting the wedge set 456, while the dark state light from the "off" pixels 410 is substantially refracted upon exiting the wedge set 456 as shown in Figure 4b.

[000142] By providing a solid transparent matched wedge set, the flatness requirement for the interface is relaxed, because variations in the flatness have a negligible effect as long as they are within the cone angle of the illuminating light 452, which can be f/2.2 with a 26 degree cone angle. In a preferred embodiment, the reflective polarizer is bonded between the matched internal surfaces of the wedge set 456 using an optical adhesive so that Fresnel reflections at the interfaces on either side of the reflective polarizer 450 are reduced. The optical adhesive can be matched in refractive index to the material of the wedge set 456 and the pieces of the wedge set 456 can all be made from the same material such as BK7 glass or cast acrylic. Wherein the wedge material can be selected to have low birefringence as well to reduce non-uniformities in brightness. The wedge set 456 and the quarter wave film 454 can also be bonded to the DLP 402 to further reduce Fresnel reflection losses at the DLP interface. In addition, since the image light 414 is substantially normal to the exit surface of the wedge set 456, the flatness of the surface is not critical to maintaining the wavefront of the image light 414, so that high image quality can be obtained in the displayed image without requiring very tightly toleranced flatness on the exit surface.
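The f/2.2 and 26 degree figures quoted above are consistent with the usual small-cone relation between f-number N and cone half angle, arctan(1/(2N)). A quick editorial check, offered as a sketch only:

import math

def cone_full_angle_deg(f_number):
    """Full cone angle for a beam of the given f-number."""
    return 2 * math.degrees(math.atan(1.0 / (2.0 * f_number)))

print(cone_full_angle_deg(2.2))  # ~25.6 degrees, i.e. roughly the 26 degree cone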

[000143] A yet further embodiment of the disclosure, which is not illustrated, combines the embodiments illustrated in Figure 4b and Figure 5. In this embodiment, the wedge set 456 is comprised of three wedges with the general geometry of the wedges in the wedge set corresponding to that shown in Figures 5 and 5a. A reflective polarizer is bonded between the first and second wedges similar to that shown in Figure 4b; however, a third wedge is provided similar to the embodiment of Figure 5. Wherein there is an angled thin air gap between the second and third wedges so that the dark state light is reflected by TIR toward the side of the second wedge where it is absorbed in a light trap. This embodiment, like the embodiment shown in Figure 4b, uses a polarized light source as has been previously described. The difference in this embodiment is that the image light is transmitted through the reflective polarizer and is transmitted through the angled thin air gap so that it exits normal to the exit surface of the third wedge.

[000144] Figure 5b illustrates an upper optical module 202 with a dark light trap 514a. As described in connection with Figures 4 and 4a, image light can be generated from a DLP when using a TIR and corrective lens configuration. The upper module may be mounted in a HWC housing 510 and the housing 510 may include a dark light trap 514a. The dark light trap 514a is generally positioned/constructed/formed in a position that is optically aligned with the dark light optical axis 512. As illustrated, the dark light trap may have depth such that the trap internally reflects dark light in an attempt to further absorb the light and prevent the dark light from combining with the image light that passes through the field lens. The dark light trap may be of a shape and depth such that it absorbs the dark light. In addition, the dark light trap 514a, in embodiments, may be made of light absorbing materials or coated with light absorbing materials. In embodiments, the recessed light trap 514a may include baffles to block a view of the dark state light. This may be combined with black surfaces and textured or fibrous surfaces to help absorb the light. The baffles can be part of the light trap, associated with the housing, or field lens, etc.

[000145] Figure 5c illustrates another embodiment with a light trap 514b. As can be seen in the illustration, the shape of the trap is configured to enhance internal reflections within the light trap 514b to increase the absorption of the dark light 512. Figure 5d illustrates another embodiment with a light trap 514c. As can be seen in the illustration, the shape of the trap 514c is configured to enhance internal reflections to increase the absorption of the dark light 512.

[000146] Figure 5e illustrates another embodiment of an upper optical module 202 with a dark light trap 514d. This embodiment of upper module 202 includes an off light redirection wedge 502, as illustrated and described in connection with the embodiment of Figures 5 and 5a. As can be seen in Figure 5e, the light trap 514d is positioned along the optical path of the dark light 512. The dark light trap 514d may be configured as described in other embodiments herein. The embodiment of the light trap 514d illustrated in Figure 5e includes a black area on the side wall of the wedge, wherein the side wall is located substantially away from the optical axis of the image light 414. In addition, baffles 525 may be added to one or more edges of the field lens 312 to block the view of the light trap 514d adjacent to the displayed image seen by the user.

[000147] Figure 6 illustrates a combination of an upper optical module 202 with a lower optical module 204. In this embodiment, the image light projected from the upper optical module 202 may or may not be polarized. The image light is reflected off a flat combiner element 602 such that it is directed towards the user's eye. The combiner element 602 is a partial mirror that reflects image light while transmitting a substantial portion of light from the environment, so the user can look through the combiner element and see the environment surrounding the HWC.

[000148] The combiner 602 may include a holographic pattern to form a holographic mirror. If a monochrome image is desired, there may be a single wavelength reflection design for the holographic pattern on the surface of the combiner 602. If the intention is to have multiple colors reflected from the surface of the combiner 602, a multiple wavelength holographic mirror may be included on the combiner surface. For example, in a three-color embodiment, where red, green and blue pixels are generated in the image light, the holographic mirror may be reflective to wavelengths substantially matching the wavelengths of the red, green and blue light provided by the light source. This configuration can be used as a wavelength specific mirror where pre-determined wavelengths of light from the image light are reflected to the user's eye. This configuration may also be made such that substantially all other wavelengths in the visible spectrum pass through the combiner element 602, so the user has a substantially clear view of the surroundings when looking through the combiner element 602. The transparency between the user's eye and the surroundings may be approximately 80% when using a combiner that is a holographic mirror. Holographic mirrors can be made using lasers to produce interference patterns in the holographic material of the combiner, where the wavelengths of the lasers correspond to the wavelengths of light that are subsequently reflected by the holographic mirror.

[000149] In another embodiment, the combiner element 602 may include a notch mirror comprised of a multilayer coated substrate, wherein the coating is designed to substantially reflect the wavelengths of light provided by the light source and substantially transmit the remaining wavelengths in the visible spectrum. For example, in the case where red, green and blue light is provided by the light source to enable full color images to be provided to the user, the notch mirror is a tristimulus notch mirror wherein the multilayer coating is designed to reflect narrow bands of red, green and blue light that are matched to what is provided by the light source, while the remaining visible wavelengths are transmitted through the coating to enable a view of the environment through the combiner. In another example, where monochrome images are provided to the user, the notch mirror is designed to reflect a single narrow band of light that is matched to the wavelength range of the light provided by the light source, while transmitting the remaining visible wavelengths to enable a see-thru view of the environment. The combiner 602 with the notch mirror would operate, from the user's perspective, in a manner similar to the combiner that includes a holographic pattern on the combiner element 602. The combiner with the tristimulus notch mirror would reflect the "on" pixels to the eye because of the match between the reflective wavelengths of the notch mirror and the color of the image light, and the wearer would be able to see the surroundings with high clarity. The transparency between the user's eye and the surroundings may be approximately 80% when using the tristimulus notch mirror. In addition, the image provided by the upper optical module 202 with the notch mirror combiner can provide higher contrast images than the holographic mirror combiner due to less scattering of the imaging light by the combiner.
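The band-matching behavior described above reduces to a simple predicate: reflect what falls inside the narrow source-matched bands, transmit everything else. The following is a minimal illustrative sketch; the band centers and half-widths are assumed values, not taken from the disclosure.

```python
# Minimal sketch of a tristimulus notch mirror: wavelengths inside any of the
# three narrow bands are reflected toward the eye; all others are transmitted
# so the wearer retains a see-thru view. Band values are illustrative only.
NOTCH_BANDS_NM = [(455, 10), (525, 12), (625, 12)]  # (center, half-width), assumed

def is_reflected(wavelength_nm: float) -> bool:
    """Return True if the notch mirror reflects this wavelength."""
    return any(abs(wavelength_nm - c) <= hw for c, hw in NOTCH_BANDS_NM)

# Image light at a matched band center is reflected to the eye ...
assert is_reflected(525)
# ... while most of the visible spectrum passes through (see-thru view).
assert not is_reflected(580)
```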

[000150] Light can escape through the combiner 602 and may produce face glow, as the light is generally directed downward onto the cheek of the user. When using a holographic mirror combiner or a tristimulus notch mirror combiner, the escaping light can be trapped to avoid face glow. In embodiments, if the image light is polarized before the combiner, a linear polarizer can be laminated onto, or otherwise associated with, the combiner, with the transmission axis of the polarizer oriented relative to the polarized image light so that any escaping image light is absorbed by the polarizer. In embodiments, the image light would be polarized to provide S polarized light to the combiner for better reflection. As a result, the linear polarizer on the combiner would be oriented to absorb S polarized light and pass P polarized light. This also matches the preferred orientation of polarized sunglasses.

[000151] If the image light is unpolarized, a microlouvered film such as a privacy filter can be used to absorb the escaping image light while providing the user with a see-thru view of the environment. In this case, the absorbance or transmittance of the microlouvered film is dependent on the angle of the light: steep angle light is absorbed and light at less of an angle is transmitted. For this reason, in an embodiment, the combiner with the microlouver film is angled at greater than 45 degrees to the optical axis of the image light (e.g. the combiner can be oriented at 50 degrees so the image light from the field lens is incident on the combiner at an oblique angle).
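The angle-dependent behavior of the film can be captured in a toy model. The cutoff angle below is an assumed value for illustration only; real privacy films have manufacturer-specific acceptance angles.

```python
# Toy model of a microlouvered privacy film: light arriving steeper than the
# louver cutoff angle is absorbed, shallower light is transmitted.
LOUVER_CUTOFF_DEG = 30.0  # assumed acceptance half-angle of the film

def transmits(incidence_deg: float) -> bool:
    """True if the film passes light at this angle from the film normal."""
    return abs(incidence_deg) <= LOUVER_CUTOFF_DEG

# With the combiner tilted past 45 degrees, escaping image light strikes the
# film obliquely and is absorbed, while the wearer's roughly normal
# see-thru view is transmitted.
assert not transmits(50.0)   # escaping image light, steep: absorbed
assert transmits(5.0)        # wearer's see-thru view: transmitted
```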

[000152] Figure 7 illustrates an embodiment of a combiner element 602 at various angles when the combiner element 602 includes a holographic mirror. Normally, a mirrored surface reflects light at an angle equal to the angle at which the light is incident to the mirrored surface. Typically, this necessitates that the combiner element be at 45 degrees, 602a, if the light is presented vertically to the combiner, so the light can be reflected horizontally towards the wearer's eye. In embodiments, the incident light can be presented at angles other than vertical to enable the mirror surface to be oriented at other than 45 degrees, but in all cases wherein a mirrored surface is employed (including the tristimulus notch mirror described previously), the incident angle equals the reflected angle. As a result, increasing the angle of the combiner 602a requires that the incident image light be presented to the combiner 602a at a different angle, which positions the upper optical module 202 to the left of the combiner as shown in Figure 7. In contrast, a holographic mirror combiner, included in embodiments, can be made such that light is reflected at a different angle from the angle at which the light is incident onto the holographic mirrored surface. This allows freedom to select the angle of the combiner element 602b independent of the angle of the incident image light and the angle of the light reflected into the wearer's eye. In embodiments, the angle of the combiner element 602b is greater than 45 degrees (as shown in Figure 7), as this allows a more laterally compact HWC design. The increased angle of the combiner element 602b decreases the front to back width of the lower optical module 204 and may allow for a thinner HWC display (i.e. the furthest element from the wearer's eye can be closer to the wearer's face).
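The geometric constraint that a conventional (non-holographic) mirror imposes can be checked numerically with the standard vector reflection formula. A minimal sketch; the specific vectors are illustrative.

```python
import numpy as np

def reflect(d: np.ndarray, n: np.ndarray) -> np.ndarray:
    """Law of reflection: reflect direction d about surface normal n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

# A conventional mirror at 45 degrees turns vertically presented image light
# (traveling straight down) into horizontal light toward the eye; any other
# mirror angle requires repositioning the incident light, as described above.
down = np.array([0.0, -1.0])
normal_45 = np.array([1.0, 1.0])   # normal of a 45-degree mirror
print(reflect(down, normal_45))    # -> [1., 0.]: horizontal, toward the eye
```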

[000153] Figure 8 illustrates another embodiment of a lower optical module 204. In this embodiment, polarized image light provided by the upper optical module 202 is directed into the lower optical module 204. The image light reflects off a polarized mirror 804 and is directed to a focusing partially reflective mirror 802, which is adapted to reflect the polarized light. An optical element such as a ¼ wave film, located between the polarized mirror 804 and the partially reflective mirror 802, is used to change the polarization state of the image light such that the light reflected by the partially reflective mirror 802 is transmitted by the polarized mirror 804 to present image light to the eye of the wearer. The user can also see through the polarized mirror 804 and the partially reflective mirror 802 to see the surrounding environment. As a result, the user perceives a combined image comprised of the displayed image light overlaid onto the see-thru view of the environment.

[000154] While many of the embodiments of the present disclosure have been referred to as upper and lower modules containing certain optical components, it should be understood that the image light and dark light production and management functions described in connection with the upper module may be arranged to direct light in other directions (e.g. upward, sideward, etc.). In embodiments, it may be preferred to mount the upper module 202 above the wearer's eye, in which case the image light would be directed downward. In other embodiments it may be preferred to produce light from the side of the wearer's eye, or from below the wearer's eye. In addition, the lower optical module is generally configured to deliver the image light to the wearer's eye and allow the wearer to see through the lower optical module, which may be accomplished through a variety of optical components.

[000155] Figure 8a illustrates an embodiment of the present disclosure where the upper optical module 202 is arranged to direct image light into a TIR waveguide 810. In this embodiment, the upper optical module 202 is positioned above the wearer's eye 812 and the light is directed horizontally into the TIR waveguide 810. The TIR waveguide is designed to internally reflect the image light in a series of downward TIR reflections until it reaches the portion in front of the wearer's eye, where the light passes out of the TIR waveguide 810 into the wearer's eye. In this embodiment, an outer shield 814 is positioned in front of the TIR waveguide 810.

[000156] Figure 8b illustrates an embodiment of the present disclosure where the upper optical module 202 is arranged to direct image light into a TIR waveguide 818. In this embodiment, the upper optical module 202 is arranged on the side of the TIR waveguide 818. For example, the upper optical module may be positioned in the arm or near the arm of the HWC when configured as a pair of head worn glasses. The TIR waveguide 818 is designed to internally reflect the image light in a series of TIR reflections until it reaches the portion in front of the wearer's eye, where the light passes out of the TIR waveguide 818 into the wearer's eye.

[000157] Figure 8c illustrates yet further embodiments of the present disclosure where an upper optical module 202 directs polarized image light into an optical guide 828, where the image light passes through a polarized reflector 824, changes polarization state upon reflection off the optical element 822 (which includes a ¼ wave film, for example), and then is reflected by the polarized reflector 824 towards the wearer's eye due to the change in polarization of the image light. The upper optical module 202 may be positioned to direct light to a mirror 820 in order to position the upper optical module 202 laterally; in other embodiments, the upper optical module 202 may direct the image light directly towards the polarized reflector 824. It should be understood that the present disclosure comprises other optical arrangements intended to direct image light into the wearer's eye.

[000158] Another aspect of the present disclosure relates to eye imaging. In embodiments, a camera is used in connection with an upper optical module 202 such that the wearer's eye can be imaged using pixels in the "off" state on the DLP. Figure 9 illustrates a system where the eye imaging camera 802 is mounted and angled such that the field of view of the eye imaging camera 802 is redirected toward the wearer's eye by the mirror pixels of the DLP 402 that are in the "off" state. In this way, the eye imaging camera 802 can be used to image the wearer's eye along the same optical axis as the displayed image that is presented to the wearer. Image light that is presented to the wearer's eye illuminates the wearer's eye so that the eye can be imaged by the eye imaging camera 802. In the process, the light reflected by the eye passes back through the optical train of the lower optical module 204 and a portion of the upper optical module to where the light is reflected by the "off" pixels of the DLP 402 toward the eye imaging camera 802.

[000159] In embodiments, the eye imaging camera may image the wearer's eye at a moment in time when there are enough "off" pixels to achieve the required eye image resolution. In another embodiment, the eye imaging camera collects eye image information from "off" pixels over time and forms a time lapsed image. In another embodiment, a modified image is presented to the user wherein enough "off" state pixels are included that the camera can obtain the desired resolution and brightness for imaging the wearer's eye, and the eye image capture is synchronized with the presentation of the modified image.
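One plausible way to realize the time-lapse accumulation described above is sketched below. The function names, the per-frame "off"-pixel mask interface, and the coverage threshold are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def accumulate_eye_image(frames, off_masks, min_coverage=1.0):
    """Build a time-lapsed eye image from camera frames, using only pixels
    whose DLP mirrors were in the "off" state when each frame was captured.

    frames    -- list of HxW camera frames (float arrays)
    off_masks -- list of HxW boolean masks, True where the DLP pixel was "off"
    """
    acc = np.zeros_like(frames[0], dtype=float)
    weight = np.zeros_like(frames[0], dtype=float)
    for frame, mask in zip(frames, off_masks):
        acc += frame * mask          # only "off"-pixel light images the eye
        weight += mask               # count how often each pixel was usable
    valid = weight >= min_coverage   # pixels seen in the "off" state enough
    return np.where(valid, acc / np.maximum(weight, 1), 0.0)
```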

[000160] The eye imaging system may be used for security systems. The HWC may not allow access to the HWC or other systems if the eye is not recognized (e.g. through eye characteristics including retina or iris characteristics, etc.). The HWC may be used to provide constant security access in some embodiments. For example, the eye security confirmation may be a continuous, near-continuous, real-time, quasi real-time, periodic, etc. process, so the wearer is effectively constantly being verified as known. In embodiments, the HWC may be worn and eye security tracked for access to other computer systems.
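A periodic verification loop of the kind described might look like the following sketch. All four callables and the period are assumed interfaces, not part of the patent text.

```python
import time

def eye_security_loop(capture_eye_image, matches_enrolled_user, lock_hwc,
                      period_s=5.0):
    """Periodically re-verify the wearer's iris/retina signature and lock the
    HWC (or a connected system) when the eye is no longer recognized."""
    while True:
        image = capture_eye_image()        # e.g. via the "off"-pixel camera
        if not matches_enrolled_user(image):
            lock_hwc()                     # deny access until re-verified
        time.sleep(period_s)               # periodic / quasi real-time check
```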

[000161] The eye imaging system may be used for control of the HWC. For example, a blink, wink, or particular eye movement may be used as a control mechanism for a software application operating on the HWC or associated device.

[000162] The eye imaging system may be used in a process that determines how or when the HWC 102 delivers digitally displayed content to the wearer. For example, the eye imaging system may determine that the user is looking in a direction and then the HWC may change the resolution in an area of the display or provide some content that is associated with something in the environment that the user may be looking at. Alternatively, the eye imaging system may identify different users and change the displayed content or enabled features provided to the user. Users may be identified from a database of user eye characteristics either located on the HWC 102 or remotely located on the network 110 or on a server 112. In addition, the HWC may identify a primary user or a group of primary users from eye characteristics, wherein the primary user(s) are provided with an enhanced set of features and all other users are provided with a different set of features. Thus, in this use case, the HWC 102 uses identified eye characteristics to either enable features or not, and eye characteristics need only be analyzed in comparison to a relatively small database of individual eye characteristics.

[000163] Figure 10 illustrates a light source that may be used in association with the upper optics module 202 (e.g. a polarized light source, if the light from the solid state light source is polarized, such as polarized light sources 302 and 458, and light source 404). In embodiments, to provide a uniform surface of light 1008 to be directed into the upper optical module 202 and towards the DLP of the upper optical module, either directly or indirectly, the solid state light source 1002 may be projected into a backlighting optical system 1004. The solid state light source 1002 may be one or more LEDs, laser diodes, or OLEDs. In embodiments, the backlighting optical system 1004 includes an extended section with a length/distance ratio of greater than 3, wherein the light undergoes multiple reflections from the sidewalls to mix or homogenize the light as supplied by the solid state light source 1002. The backlighting optical system 1004 can also include structures on the surface opposite (on the left side as shown in Figure 10) to where the uniform light 1008 exits the backlight 1004, to change the direction of the light toward the DLP 302 and the reflective polarizer 310 or the DLP 402 and the TIR wedge 418. The backlighting optical system 1004 may also include structures to collimate the uniform light 1008 to provide light to the DLP with a smaller angular distribution or narrower cone angle. Diffusers or polarizers can be used on the entrance or exit surface of the backlighting optical system. Diffusers can be used to spread or uniformize the exiting light from the backlight to improve the uniformity or increase the angular spread of the uniform light 1008. Elliptical diffusers that diffuse the light more in some directions and less in others can be used to improve the uniformity or spread of the uniform light 1008 in directions orthogonal to the optical axis of the uniform light 1008. Linear polarizers can be used to convert unpolarized light as supplied by the solid state light source 1002 to polarized light, so the uniform light 1008 is polarized with a desired polarization state. A reflective polarizer can be used on the exit surface of the backlight 1004 to polarize the uniform light 1008 to the desired polarization state, while reflecting the other polarization state back into the backlight, where it is recycled by multiple reflections within the backlight 1004 and at the solid state light source 1002. Therefore, by including a reflective polarizer at the exit surface of the backlight 1004, the efficiency of the polarized light source is improved.
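The efficiency gain from this recycling can be estimated with a short geometric-series model. This is an illustrative sketch under assumed conditions (unpolarized input, full depolarization on each recycle pass, and a fixed round-trip return fraction); the numbers are not from the disclosure.

```python
def polarized_output_fraction(recycle_return=0.8):
    """Fraction of source light ultimately emitted in the desired polarization
    when a reflective polarizer recycles the rejected state.

    Each pass transmits half the (depolarized) light; the reflected half is
    recycled, with `recycle_return` of it surviving the round trip through
    the backlight and source. Summing the geometric series:
        0.5 + 0.5*(r/2) + 0.5*(r/2)**2 + ... = 0.5 / (1 - r/2)
    """
    return 0.5 / (1.0 - 0.5 * recycle_return)

print(polarized_output_fraction(0.0))  # 0.5   -- absorptive polarizer baseline
print(polarized_output_fraction(0.8))  # ~0.83 -- recycling improves efficiency
```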

[000164] Figures 10a and 10b show illustrations of structures in backlight optical systems 1004 that can be used to change the direction of the light provided to the entrance face 1045 by the light source and then collimate the light in a direction lateral to the optical axis of the exiting uniform light 1008. Structure 1060 includes an angled sawtooth pattern in a transparent waveguide, wherein the left edge of each sawtooth clips the steep angle rays of light, thereby limiting the angle of the light being redirected. The steep surface at the right (as shown) of each sawtooth then redirects the light so that it reflects off the left angled surface of each sawtooth and is directed toward the exit surface 1040. The sawtooth surfaces shown on the lower surface in Figures 10a and 10b can be smooth and coated (e.g. with an aluminum coating or a dielectric mirror coating) to provide a high level of reflectivity without scattering. Structure 1050 includes a curved face on the left side (as shown) to focus the rays after they pass through the exit surface 1040, thereby providing a mechanism for collimating the uniform light 1008. In a further embodiment, a diffuser can be provided between the solid state light source 1002 and the entrance face 1045 to homogenize the light provided by the solid state light source 1002. In yet a further embodiment, a polarizer can be used between the diffuser and the entrance face 1045 of the backlight 1004 to provide a polarized light source. Because the sawtooth pattern provides smooth reflective surfaces, the polarization state of the light can be preserved from the entrance face 1045 to the exit face 1040. In this embodiment, the light entering the backlight from the solid state light source 1002 passes through the polarizer so that it is polarized with the desired polarization state. If the polarizer is an absorptive linear polarizer, the light of the desired polarization state is transmitted while the light of the other polarization state is absorbed. If the polarizer is a reflective polarizer, the light of the desired polarization state is transmitted into the backlight 1004 while the light of the other polarization state is reflected back into the solid state light source 1002, where it can be recycled as previously described to increase the efficiency of the polarized light source.

[000165] Figure 11a illustrates a light source 1100 that may be used in association with the upper optics module 202. In embodiments, the light source 1100 may provide light to a backlighting optical system 1004 as described above in connection with Figure 10. In embodiments, the light source 1100 includes a tristimulus notch filter 1102. The tristimulus notch filter 1102 has narrow band pass filters for three wavelengths, as indicated in the transmission graph 1108 in Figure 11c. The graph 1104 shown in Figure 11b illustrates the output of three different colored LEDs. One can see that the bandwidths of emission are narrow, but they have long tails. The tristimulus notch filter 1102 can be used in connection with such LEDs to provide a light source 1100 that emits narrow filtered wavelengths of light, as shown in the transmission graph 1110 in Figure 11d. The clipping effects of the tristimulus notch filter 1102 can be seen to have cut the tails from the LED emission graph 1104 to provide narrower wavelength bands of light to the upper optical module 202. The light source 1100 can be used in connection with a combiner 602 with a holographic mirror or tristimulus notch mirror to provide narrow bands of light that are reflected toward the wearer's eye with less waste light that does not get reflected by the combiner, thereby improving efficiency and reducing escaping light that can cause face glow.
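The tail-clipping effect can be demonstrated numerically by multiplying a modeled LED spectrum by a narrow-band transmission curve. All spectral shapes and numbers below are assumed for illustration; they do not reproduce the graphs 1104, 1108, or 1110.

```python
import numpy as np

wl = np.linspace(400, 700, 301)  # visible wavelengths, nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Illustrative LED emission: narrow peaks plus broad low-level tails
# (modeled here as a wide pedestal under each peak; values are assumed).
led = sum(gaussian(c, 15) + 0.1 * gaussian(c, 60) for c in (460, 530, 620))

# Tristimulus notch filter: near-unity transmission only in three narrow bands.
notch = sum(gaussian(c, 8) for c in (460, 530, 620))

filtered = led * np.clip(notch, 0.0, 1.0)  # tails clipped; narrow bands remain
```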

[000166] Figure 12a illustrates another light source 1200 that may be used in association with the upper optics module 202. In embodiments, the light source 1200 may provide light to a backlighting optical system 1004 as described above in connection with Figure 10. In embodiments, the light source 1200 includes a quantum dot cover glass 1202. The quantum dots absorb light of a shorter wavelength and emit light of a longer wavelength (Figure 12b shows an example wherein a UV spectrum 1202 applied to a quantum dot results in the quantum dot emitting a narrow band shown as a PL spectrum 1204) that is dependent on the material makeup and size of the quantum dot. As a result, quantum dots in the quantum dot cover glass 1202 can be tailored to provide one or more bands of narrow bandwidth light (e.g. red, green and blue emissions, dependent on the different quantum dots included, as illustrated in the graph shown in Figure 12c, where three different quantum dots are used). In embodiments, the LED driver light emits UV light, deep blue or blue light. For sequential illumination of different colors, multiple light sources 1200 would be used, where each light source 1200 would include a quantum dot cover glass 1202 with a quantum dot selected to emit at one of the desired colors. The light source 1200 can be used in connection with a combiner 602 with a holographic mirror or tristimulus notch mirror to provide narrow transmission bands of light that are reflected toward the wearer's eye with less waste light that does not get reflected.

[000167] Another aspect of the present disclosure relates to the generation of peripheral image lighting effects for a person wearing a HWC. In embodiments, a solid state lighting system (e.g. LED, OLED, etc.), or other lighting system, may be included inside the optical elements of a lower optical module 204. The solid state lighting system may be arranged such that lighting effects outside of a field of view (FOV) of the presented digital content are presented to create an immersive effect for the person wearing the HWC. To this end, the lighting effects may be presented to any portion of the HWC that is visible to the wearer. The solid state lighting system may be digitally controlled by an integrated processor on the HWC. In embodiments, the integrated processor will control the lighting effects in coordination with digital content that is presented within the FOV of the HWC. For example, a movie, picture, game, or other content may be displayed or playing within the FOV of the HWC. The content may show a bomb blast on the right side of the FOV and, at the same moment, the solid state lighting system inside of the lower module optics may flash quickly in concert with the FOV image effect. The effect may not be fast; it may be more persistent to indicate, for example, a general glow or color on one side of the user. The solid state lighting system may be color controlled, with red, green and blue LEDs, for example, such that color control can be coordinated with the digitally presented content within the field of view.
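Coordination between content and peripheral lighting could be driven by a simple cue list, as in the sketch below. The cue format and the `set_leds` driver interface are assumptions for illustration, not part of the disclosure.

```python
# Sketch of coordinating peripheral lighting with scripted content events.
CUES = [
    # (start_time_s, side, rgb_color, duration_s)
    (12.4, "right", (255, 180, 60), 0.2),   # bomb blast: quick warm flash
    (30.0, "left",  (40, 40, 255), 5.0),    # persistent blue glow
]

def run_effects(cues, now_s, set_leds):
    """Fire any cue active at the current playback time. `set_leds(side,
    color)` is an assumed driver for the RGB LEDs in the lower module optics."""
    for start, side, color, duration in cues:
        if start <= now_s < start + duration:
            set_leds(side, color)
```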

[000168] Figure 13a illustrates optical components of a lower optical module 204 together with an outer lens 1302. Figure 13a also shows an embodiment including effects LEDs 1308a and 1308b. Figure 13a illustrates image light 1312, as described herein elsewhere, directed into the lower optical module, where it will reflect off of the combiner element 1304, as described herein elsewhere. The combiner element 1304 in this embodiment is angled towards the wearer's eye at the top of the module and away from the wearer's eye at the bottom of the module, as also illustrated and described in connection with Figure 8 (e.g. at a 45 degree angle). The image light 1312 provided by an upper optical module 202 (not shown in Figure 13a) reflects off of the combiner element 1304 towards the collimating mirror 1310, away from the wearer's eye, as described herein elsewhere. The image light 1312 then reflects and focuses off of the collimating mirror 1310, passes back through the combiner element 1304, and is directed into the wearer's eye. The wearer can also view the surrounding environment through the transparency of the combiner element 1304, collimating mirror 1310, and outer lens 1302 (if it is included). As described herein elsewhere, various surfaces are polarized to create the optical path for the image light and to provide transparency of the elements such that the wearer can view the surrounding environment. The wearer will generally perceive that the image light forms an image in the FOV 1305. In embodiments, the outer lens 1302 may be included. The outer lens 1302 may or may not be corrective, and it may be designed to conceal the lower optical module components in an effort to make the HWC appear to be in a form similar to standard glasses or sunglasses.

[000169] In the embodiment illustrated in Figure 13a, the effects LEDs 1308a and 1308b are positioned at the sides of the combiner element 1304 and the outer lens 1302 and/or the collimating mirror 1310. In embodiments, the effects LEDs 1308a are positioned within the confines defined by the combiner element 1304 and the outer lens 1302 and/or the collimating mirror. The effects LEDs 1308a and 1308b are also positioned outside of the FOV 1305. In this arrangement, the effects LEDs 1308a and 1308b can provide lighting effects within the lower optical module outside of the FOV 1305. In embodiments, the light emitted from the effects LEDs 1308a and 1308b may be polarized such that the light passes through the combiner element 1304 toward the wearer's eye and does not pass through the outer lens 1302 and/or the collimating mirror 1310. This arrangement provides peripheral lighting effects to the wearer in a more private setting by not transmitting the lighting effects through the front of the HWC into the surrounding environment. However, in other embodiments, the effects LEDs 1308a and 1308b may be unpolarized, so the lighting effects provided are made to be purposefully viewable by others in the environment for entertainment, such as giving the effect of the wearer's eye glowing in correspondence to the image content being viewed by the wearer.

[000170] Figure 13b illustrates a cross section of the embodiment described in connection with Figure 13a. As illustrated, the effects LED 1308a is located in the upper-front area inside of the optical components of the lower optical module. It should be understood that the effects LED 1308a position in the described embodiments is only illustrative and alternate placements are encompassed by the present disclosure. Additionally, in embodiments, there may be one or more effects LEDs 1308a in each of the two sides of the HWC to provide peripheral lighting effects near one or both eyes of the wearer.

[000171] Figure 13c illustrates an embodiment where the combiner element 1304 is angled away from the eye at the top and towards the eye at the bottom (e.g. in accordance with the holographic or notch filter embodiments described herein). In this embodiment, the effects LED 1308a is located on the outer lens 1302 side of the combiner element 1304 to provide a concealed appearance of the lighting effects. As with other embodiments, the effects LED 1308a of Figure 13c may include a polarizer such that the emitted light can pass through a polarized element associated with the combiner element 1304 and be blocked by a polarized element associated with the outer lens 1302.

[000172] Another aspect of the present disclosure relates to the mitigation of light escaping from the space between the wearer's face and the HWC itself. Another aspect of the present disclosure relates to maintaining a controlled lighting environment in proximity to the wearer's eyes. In embodiments, both the maintenance of the lighting environment and the mitigation of light escape are accomplished by including a removable and replaceable flexible shield for the HWC. The removable and replaceable shield can be provided for one eye or both eyes, in correspondence to the use of the displays for each eye. For example, in a night vision application, the display to only one eye could be used for night vision while the display to the other eye is turned off to provide good see-thru when moving between areas where visible light is available and dark areas where night vision enhancement is needed.

[000173] Figure 14a illustrates a removable and replaceable flexible eye cover 1402 with an opening 1408 that can be attached to and removed quickly from the HWC 102 through the use of magnets 1404. Other attachment methods may be used, but for illustration of the present disclosure we will focus on a magnet implementation. In embodiments, magnets may be included in the eye cover 1402 and magnets of an opposite polarity may be included (e.g. embedded) in the frame of the HWC 102. The magnets of the two elements would attract quite strongly with the opposite polarity configuration. In another embodiment, one of the elements may have a magnet and the other side may have metal for the attraction. In embodiments, the eye cover 1402 is a flexible elastomeric shield. In embodiments, the eye cover 1402 may be an elastomeric bellows design to accommodate flexibility and more closely align with the wearer's face. Figure 14b illustrates a removable and replaceable flexible eye cover 1402 that is adapted as a single eye cover. In embodiments, a single eye cover may be used for each side of the HWC to cover both eyes of the wearer. In embodiments, the single eye cover may be used in connection with a HWC that includes only one computer display for one eye. These configurations prevent light that is generated and directed generally towards the wearer's face from escaping, by covering the space between the wearer's face and the HWC. The opening 1408 allows the wearer to look through the opening 1408 to view the displayed content and the surrounding environment through the front of the HWC. The image light in the lower optical module 204 can be prevented from emitting from the front of the HWC through internal optics polarization schemes, as described herein, for example.

[000174] Figure 14c illustrates another embodiment of a light suppression system. In this embodiment, the eye cover 1410 may be similar to the eye cover 1402, but eye cover 1410 includes a front light shield 1412. The front light shield 1412 may be opaque to prevent light from escaping the front lens of the HWC. In other embodiments, the front light shield 1412 is polarized to prevent light from escaping the front lens. In a polarized arrangement, in embodiments, the internal optical elements of the HWC (e.g. of the lower optical module 204) may polarize light transmitted towards the front of the HWC and the front light shield 1412 may be polarized to prevent the light from transmitting through the front light shield 1412.

[000175] In embodiments, an opaque front light shield 1412 may be included and the digital content may include images of the surrounding environment such that the wearer can visualize the surrounding environment. One eye may be presented with night vision environmental imagery and this eye's surrounding environment optical path may be covered using an opaque front light shield 1412. In other embodiments, this arrangement may be associated with both eyes.

[000176] Another aspect of the present disclosure relates to automatically configuring the lighting system(s) used in the HWC 102. In embodiments, the display lighting and/or effects lighting, as described herein, may be controlled in a manner suitable for when an eye cover 1402 is attached to or removed from the HWC 102. For example, at night, when the light in the environment is low, the lighting system(s) in the HWC may go into a low light mode to further control any amounts of stray light escaping from the HWC and the areas around the HWC. Covert operations at night, while using night vision or standard vision, may require a solution which prevents as much escaping light as possible, so a user may clip on the eye cover(s) 1402 and the HWC may then go into a low light mode. In some embodiments, the HWC may only go into the low light mode when the eye cover 1402 is attached if the HWC identifies that the environment is in low light conditions (e.g. through environment light level sensor detection). In embodiments, the low light level may be determined to be at an intermediate point between full and low light, dependent on environmental conditions.
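The decision logic described above combines two inputs: whether the eye cover is attached and what the ambient light sensor reports. A minimal sketch follows; the lux thresholds are assumed values, since the disclosure describes the behavior but not specific numbers.

```python
def select_lighting_mode(eye_cover_attached: bool, ambient_lux: float) -> str:
    """Pick a lighting mode for the HWC's display/effects lighting."""
    if not eye_cover_attached:
        return "normal"
    if ambient_lux < 10:        # dark environment detected by the sensor
        return "low_light"
    if ambient_lux < 100:       # dusk: intermediate point between full and low
        return "intermediate"
    return "normal"             # cover attached but environment is bright
```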

[000177] Another aspect of the present disclosure relates to automatically controlling the type of content displayed in the HWC when eye covers 1402 are attached to or removed from the HWC. In embodiments, when the eye cover(s) 1402 is attached to the HWC, the displayed content may be restricted in amount or in color content. For example, the display(s) may go into a simple content delivery mode to restrict the amount of information displayed. This may be done to reduce the amount of light produced by the display(s). In an embodiment, the display(s) may change from color displays to monochrome displays to reduce the amount of light produced. In an embodiment, the monochrome lighting may be red to limit the impact on the wearer's eyes and to maintain an ability to see better in the dark.
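Continuing the sketch above, the same attach event could also switch the display into the restricted content mode. The settings dictionary and its field names are hypothetical, used only to make the behavior concrete.

```python
def select_display_mode(eye_cover_attached: bool) -> dict:
    """Restrict displayed content when the eye cover is attached."""
    if eye_cover_attached:
        return {
            "content": "simple",      # reduced amount of information shown
            "palette": "monochrome",  # less light produced by the display
            "color": "red",           # red limits impact on dark adaptation
        }
    return {"content": "full", "palette": "color", "color": None}
```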

[000178] Another aspect of the present disclosure relates to a system adapted to quickly convert from a see-through system to a non-see-through or very low transmission see-through system for a more immersive user experience. The conversion system may include replaceable lenses, an eye cover, and optics adapted to provide user experiences in both modes. The lenses, for example, may be 'blacked-out' to provide an experience where all of the user's attention is dedicated to the digital content, and then the lenses may be switched out for high see-through lenses so the digital content is augmenting the user's view of the surrounding environment. Another aspect of the disclosure relates to low transmission lenses that permit the user to see through the lenses but remain dark enough to maintain most of the user's attention on the digital content. The slight see-through can provide the user with a visual connection to the surrounding environment, and this can reduce or eliminate nausea and other problems associated with total removal of the surrounding view when viewing digital content.

[000179] Figure 14d illustrates a head-worn computer system 102 with a see-through digital content display 204 adapted to include a removable outer lens 1414 and a removable eye cover 1402. The eye cover 1402 may be attached to the head-worn computer 102 with magnets 1404 or other attachment systems (e.g. mechanical attachments, a snug friction fit between the arms of the head-worn computer 102, etc.). The eye cover 1402 may be attached when the user wants to keep stray light from escaping the confines of the head-worn computer, create a more immersive experience by removing the otherwise viewable peripheral view of the surrounding environment, etc. The removable outer lens may be of several varieties for various experiences. It may have no transmission or a very low transmission to create a dark background for the digital content, creating an immersive experience for the digital content. It may have a high transmission so the user can see through the see-through display and the lens to view the surrounding environment, creating a system for a heads-up display, augmented reality display, assisted reality display, etc. The lens 1414 may be dark in a middle portion to provide a dark background for the digital content (i.e. a dark backdrop behind the see-through field of view from the user's perspective) and a higher transmission area elsewhere. The lenses 1414 may have a transmission in the range of 2 to 5%, 5 to 10%, or 10 to 20% for the immersion effect, and above 10% or 20% for the augmented reality effect, for example. The lenses 1414 may also have an adjustable transmission to facilitate the change in system effect. For example, the lenses 1414 may be electronically adjustable tint lenses (e.g. liquid crystal, or crossed polarizers with an adjustment for the level of crossing).
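The experience-to-transmission mapping above can be stated directly in code. The specific target values are picked from the example ranges in the preceding paragraph and are otherwise arbitrary; an electronically adjustable tint lens could be driven to the same targets.

```python
# Sketch mapping the desired experience to a removable-lens transmission.
LENS_TRANSMISSION = {
    "immersive":      0.03,  # 2-5%: near-black backdrop for digital content
    "slight_seethru": 0.08,  # 5-10%: dark, but keeps a visual connection
    "augmented":      0.30,  # above 20%: see-through AR / heads-up use
}

def target_transmission(mode: str) -> float:
    """Return the lens transmission target for a given system effect."""
    return LENS_TRANSMISSION[mode]
```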

[000180] In embodiments, the eye cover may have areas of transparency or partial transparency to provide some visual connection with the user's surrounding environment. This may also reduce or eliminate nausea or other feelings associated with the complete removal of the view of the surrounding environment.

[000181] Figure 14e illustrates a head-worn computer 102 assembled with an eye cover 1402 without lenses in place. The lenses, in embodiments, may be held in place with magnets 1418 for ease of removal and replacement. In embodiments, the lenses may be held in place with other systems, such as mechanical systems.

[000182] Another aspect of the present disclosure relates to an effects system that generates effects outside of the field of view in the see-through display of the head-worn computer. The effects may be, for example, lighting effects, sound effects, tactile effects (e.g. through vibration), air movement effects, etc. In embodiments, the effect generation system is mounted on the eye cover 1402. For example, a lighting system (e.g. LED(s), OLEDs, etc.) may be mounted on an inside surface 1420, or exposed through the inside surface 1420, as illustrated in Figure 14f, such that it can create a lighting effect (e.g. a bright light, colored light, or subtle color effect) in coordination with content being displayed in the field of view of the see-through display. The content may be a movie or a game, for example, and an explosion may happen on the right side of the content, as scripted; matching the content, a bright flash may be generated by the effects lighting system to create a stronger effect. As another example, the effects system may include a vibratory system mounted near the sides or temples, or otherwise, and when the same explosion occurs, the vibratory system may generate a vibration on the right side to increase the user experience, indicating that the explosion had a real sound wave creating the vibration. As yet a further example, the effects system may have an air system where the effect is a puff of air blown onto the user's face. This may create a feeling of closeness with some fast moving object in the content. The effects system may also have speakers directed towards the user's ears or an attachment for ear buds, etc.

[000183] In embodiments, the effects generated by the effects system may be scripted by an author to coordinate with the content. In embodiments, sensors may be placed inside of the eye cover to monitor content effects (e.g. a light sensor to measure strong lighting effects or peripheral lighting effects) that would then cause an effect(s) to be generated.
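The sensor-driven variant, as opposed to the scripted cue list sketched earlier, might look like the following. The sensor and actuator callables and the threshold are assumed interfaces for illustration.

```python
def sensor_triggered_effects(read_light_sensor, vibrate, threshold=0.8):
    """Unscripted fallback: a light sensor inside the eye cover detects a
    strong on-screen lighting event and triggers a matching haptic effect."""
    level = read_light_sensor()      # normalized 0..1 peripheral brightness
    if level > threshold:            # e.g. an explosion flash in the content
        vibrate(side="right", duration_s=0.15)
```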

[000184] The effects system in the eye cover may be powered by an internal battery and the battery, in embodiments, may also provide additional power to the head-worn computer 102 as a back-up system. In embodiments, the effects system is powered by the batteries in the head-worn computer. Power may be delivered through the attachment system (e.g. magnets, mechanical system) or a dedicated power system.

[000185] The effects system may receive data and/or commands from the head-worn computer through a data connection that is wired or wireless. The data may come through the attachment system, a separate line, or through Bluetooth or other short range communication protocol, for example.

[000186] In embodiments, the eye cover is made of reticulated foam, which is very light and can contour to the user's face. The reticulated foam also allows air to circulate because of the open-celled nature of the material, which can reduce user fatigue and increase user comfort. The eye cover may be made of other materials, soft, stiff, pliable, etc., and may have another material on the periphery that contacts the face for comfort. In embodiments, the eye cover may include a fan to exchange air between an external environment and an internal space, where the internal space is defined in part by the face of the user. The fan may operate very slowly and at low power to exchange the air to keep the face of the user cool. In embodiments, the fan may have a variable speed controller and/or a temperature sensor may be positioned to measure temperature in the internal space to control the temperature in the internal space to a specified range, temperature, etc. The internal space is generally characterized by the confined space in front of the user's eyes and upper cheeks where the eye cover encloses the area.
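A variable-speed fan held to a temperature setpoint suggests a simple proportional control loop, sketched below. The target, control band, and hardware interfaces are assumed values, not from the disclosure.

```python
def update_fan(read_temp_c, set_fan_speed, target_c=27.0, band_c=2.0):
    """Proportional control of the eye-cover fan to hold the internal
    space near a target temperature. Call periodically."""
    error = read_temp_c() - target_c       # sensor in the internal space
    if error <= 0:
        set_fan_speed(0.0)                 # cool enough: fan off, no noise
    else:
        # Ramp from 0 to full speed across the control band; the fan runs
        # slowly at low power for small errors, as described above.
        set_fan_speed(min(1.0, error / band_c))
```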

[000187] Another aspect of the present disclosure relates to flexibly mounting an audio headset on the head-worn computer 102 and/or the eye cover 1402. In embodiments, the audio headset is mounted with a relatively rigid system that has flexible joint(s) (e.g. a rotational joint at the connection with the eye cover, a rotational joint in the middle of a rigid arm, etc.) and extension(s) (e.g. a telescopic arm) to provide the user with adjustability to allow for a comfortable fit over, in or around the user's ear. In embodiments, the audio headset is mounted with a flexible system that is more flexible throughout, such as with a wire-based connection.

[000188] Figure 14g illustrates a head-worn computer 102 with removable lenses 1414 along with a mounted eye cover 1402. The head-worn computer, in embodiments, includes a see-through display (as disclosed herein). The eye cover 1402 also includes a mounted audio headset 1422. The mounted audio headset 1422 in this embodiment is mounted to the eye cover 1402 and has audio wire connections (not shown). In embodiments, the audio wire connections may connect to an internal wireless communication system (e.g. Bluetooth, NFC, WiFi) to make a connection to the processor in the head-worn computer. In embodiments, the audio wires may connect to a magnetic connector, mechanical connector or the like to make the connection.

[000189] Figure 14h illustrates an unmounted eye cover 1402 with a mounted audio headset 1422. As illustrated, the mechanical design of the eye cover is adapted to fit onto the head-worn computer to provide visual isolation or partial isolation and to carry the audio headset.

[000190] In embodiments, the eye cover 1402 may be adapted to be removably mounted on a head-worn computer 102 with a see-through computer display. An audio headset 1422 with an adjustable mount may be connected to the eye cover, wherein the adjustable mount may provide extension and rotation to provide a user of the head-worn computer with a mechanism to align the audio headset with an ear of the user. In embodiments, the audio headset includes an audio wire connected to a connector on the eye cover, and the eye cover connector may be adapted to removably mate with a connector on the head-worn computer. In embodiments, the audio headset may be adapted to receive audio signals from the head-worn computer through a wireless connection (e.g. Bluetooth, WiFi). As described elsewhere herein, the head-worn computer may have a removable and replaceable front lens. The eye cover may include a battery to power systems internal to the eye cover. The eye cover may have a battery to power systems internal to the head-worn computer.

[000191] In embodiments, the eye cover may include a fan adapted to exchange air between an internal space, defined in part by the user's face, and an external environment to cool the air in the internal space and the user's face. In embodiments, the audio headset may include a vibratory system (e.g. a vibration motor, piezo motor, etc. in the armature and/or in the section over the ear) adapted to provide the user with haptic feedback coordinated with digital content presented in the see-through computer display. In embodiments, the head-worn computer includes a vibratory system adapted to provide the user with haptic feedback coordinated with digital content presented in the see-through computer display.

[000192] In embodiments, the eye cover 1402 is adapted to be removably mounted on a head-worn computer with a see-through computer display. The eye cover may also include a flexible audio headset mounted to the eye cover, wherein the flexibility provides the user of the head-worn computer with a mechanism to align the audio headset with an ear of the user. In embodiments, the flexible audio headset is mounted to the eye cover with a magnetic connection. In embodiments, the flexible audio headset may be mounted to the eye cover with a mechanical connection.

[000193] In embodiments, the audio headset may be spring loaded or otherwise loaded such that the headset presses inward towards the user's ears for a more secure fit.

[000194] Another aspect of the present disclosure involves a head-worn computer with stereo scene cameras. In embodiments, two or more cameras and/or sensors are mounted behind the outer lens(es) of the head-worn computer. In embodiments, the lens(es) is tinted to conceal the cameras. In embodiments, the lens(es) has a more transparent area in front of the camera(s) to allow for high transparency scene image capture.

[000195] Figure 14i illustrates a head-worn computer 102 with see-through computer displays, where two cameras 1424a and 1424b are mounted on a frame front surface 1428 such that they are concealed by lenses (not shown in Figure 14i). The front surface 1428 further conceals the image engine, or upper optics. The see-through displays, in this illustration, are mounted below the frame front surface 1428. The user's view is through the portion of the optics that is mounted below the front surface 1428.

[000196] In embodiments, a head-worn computer comprises an optical system with at least two portions: a first optical portion that generates an image, and a second optical portion that provides the image to a user's eye and through which the user views a surrounding environment. The first portion is substantially concealed from view by a frame above the user's eye, the frame having a front surface concealing a portion of the first optical portion, the front surface further positioned to be at least partially concealed from view by a removable lens; and a scene camera is positioned to view the surrounding environment through the frame front surface. In embodiments, the head-worn computer may have a removable lens that, when mounted on the head-worn computer, conceals the scene camera. In embodiments, the head-worn computer may further comprise a second scene camera, wherein the scene camera and the second scene camera are mounted with the scene camera over a first see-through display and the second scene camera over a second see-through display.

[000197] Another aspect of the present disclosure relates to concealing a speaker system within a frame of a head-worn computer. Such a system can create a head-worn computer that provides audio to the user without requiring the user to attach or manipulate ear buds or other speaker systems into or onto the ear. The head-worn computer may have an optional jack, magnetic connector or the like for the attachment of a separate speaker system that can be put into, onto or otherwise proximate the user's ear. In embodiments, the internal speaker system is arranged to output audio proximate the ear once the head-worn computer is mounted on the user's head.

[000198] Figure 14J illustrates a head-worn computer that includes an internal speaker system 1438. The internal speaker 1438, in embodiments, is mounted internal to an arm of the head-worn computer. The arm may be described as having a temple portion 1432 and an ear-horn 1434, where the arm is attached to a front frame 1430. In embodiments, the internal speaker system 1438 is positioned in the temple portion 1432 proximate the user's ear 1440 (e.g. in front of the user's ear with the output from the internal audio system pointing at the ear, or above the ear with the output from the internal audio system pointing at the ear).

[000199] Figure 14J2 illustrates components of an example internal speaker system 1438. In this embodiment, the internal speaker system is positioned within the temple portion 1432. The components of the internal speaker system, in embodiments, include a speaker 1448 that generates the audio, with several cavities to provide enhanced sound for the user. A back cavity 1444 is positioned behind the speaker 1448 and an extended back cavity 1442 is positioned in air communication with the back cavity 1444. The extended back cavity 1442 is vented out of the temple portion 1432 to outside atmosphere to provide the speaker with a back vent with back cavities. In embodiments, there may be only a single back cavity or two or more back cavities. The internal speaker system 1438 may also include a front cavity 1450 in front of the speaker 1448. The front cavity 1450 may have a vent to outside atmosphere to provide the speaker with an outlet for the audio provided to the user's ear. The front cavity may be a single cavity or a multiple cavity configuration.

[000200] While the embodiment in Figure 14J2 illustrates particular geometries with respect to the speaker 1448, cavities, and vents, it should be understood that the inventors appreciate that the various elements of the internal speaker system 1438 may be otherwise configured. For example, the configuration of the embodiment illustrated in Figure 14J2 is determined to some extent by the fact that there is a battery compartment 1454 in the temple portion 1432. Without a battery, or with a smaller or larger battery, etc., the speaker 1448, cavities and vents may be differently configured. In embodiments, the temple portion 1432 is tubular in design, which may include a round battery, and the cavities may be elongated to traverse the length of the tubular design.

[000201] Figure 14K illustrates a head-worn computer with internal speaker systems 1438 mounted in a front frame 1430. In this configuration the sound emanating from the internal speaker systems 1438 is projected rearward 1458 towards the user's ears.

[000202] Another aspect of the present disclosure relates to providing an audio system with audio extension tubes for the transfer of audio from within a head-worn computer to the ear of a user.

[000203] Figure 14ka illustrates a speaker system 1468 that produces sound in accordance with audio signals from within the head-worn computer. The output of the sound propagates through an extension tube 1474 towards the user's ear. The tube may terminate close to the ear, have an end that fits into the ear, over the ear, etc. In embodiments, the extension tube 1474 may be made of soft, medium or hard material. The extension tube may further be adapted to hold a general shape such that it ends close to the user's ear even after a number of uses. In embodiments, the extension tube may further be adapted to be mechanically and/or magnetically attachable to the head-worn computer such that the extension tube 1474 aligns with a sound output of the speaker system 1468. For example, as illustrated, a countersunk magnet 1470 may be mounted in the head-worn computer and a mating conical magnet 1472 may be mounted on the extension tube 1474 such that it mates with the countersunk magnet 1470 and aligns the extension tube 1474 with the sound output of the speaker system 1468.

[000204] Figure 14kb illustrates a head-worn system with a balance armature speaker system 1476 mounted within the head-worn display. The audio extension tube 1474 in this embodiment is inserted with a snug fit into the head-worn computer temple portion at the sound outlet of the balance armature speaker system 1476. The audio extension tube 1474 may be sized to tune the sound produced for the ear. For example, the length, diameter, wall thickness, etc. may be sized to tune the sound. The audio extension tube 1474 may also have features on the inner surface to alter the sound.

[000205] Figure 14kc illustrates an embodiment where the balance armature speaker 1476 ports its sound output into an audio tuning chamber 1478 to tune the sound. Figure 14kd illustrates an embodiment using a dynamic micro-speaker that pumps sound into the audio tuning chamber 1478. Figure 14ke illustrates an embodiment where a balance armature speaker 1476 and a dynamic micro-speaker 1480 pump sound into the audio tuning chamber 1478 for further enhanced sound production. In embodiments, haptics may also be included to enhance the bass, low tone sound experience produced by the system.

[000206] Another aspect of the speaker system may involve having audio outlet holes in the head-worn computer such that the user can hear the audio without an extension tube (e.g. as illustrated and described in connection with Figures 14J and 14K). In embodiments, the audio extension tube 1474 may be optional. Figure 14kf illustrates an audio extension tube 1474 with an outlet cover 1484 formed and positioned to cover the speaker grill (e.g. audio outlet holes). The cover 1484 may include features that fit into the audio outlet holes. Figure 14kg illustrates a speaker system where the speaker grill can be covered with an internal flap 1486. The internal flap 1486 may pivot, or otherwise move, into position to cover the speaker grill when the audio extension tube is inserted or otherwise mounted.

[000207] Another aspect of the speaker system includes an outer flap to direct the sound from the speaker grill towards the user's ear. Figure 14kh illustrates an outer flap 1488 positioned near the speaker grill such that the outer flap 1488 redirects at least some of the sound coming out of the speaker grill. In embodiments, the outer flap may fold, slide or otherwise move in and out of the temple portion, or it may be in a fixed position. The outer flap may be rigid, flexible, etc. Figure 14kg illustrates a system where the outer flap slides in and out of the temple.

[000208] Another aspect of the present disclosure relates to electrical systems for making connections to removable electrochromic lenses. Electrochromic lenses can be quite useful on a head-worn computer with a see-through computer display. The electrochromic layer has the ability to switch from a relatively clear mode to a tinted or dark mode, which can help a user see through the lens easily or darken the background for the digital content presented in the computer display. The inventors have appreciated that, while electrochromic lenses are good in certain circumstances, they may not be ideal for all scenarios. A head-worn computer with removable and replaceable lenses (e.g. as described herein elsewhere) provides a flexible platform that the user can customize. In embodiments, the head-worn computer has removable electrochromic lenses with electrical connections adapted to make solid electrical contact when the lens is mounted onto the head-worn computer.

[000209] In embodiments, the electrochromic layer of a lens is a film or other material coating one side of the lens. The electrochromic layer includes multiple surfaces, two of which are electrically conductive and meant to generate the necessary charge to change the state of the electrochromic layer. In embodiments, electrodes are provided in the head-worn computer that mate with the electrically conductive layers to provide variable power. The power may be controlled between two states (e.g. on and off), multiple states (e.g. multiple power levels, pulse width modulated power levels), continuous control, discrete control, etc. The power delivered to the electrochromic layer may be controlled by a processor in the head-worn computer. In embodiments, the power is user controlled, automatically controlled based on external conditions (e.g. bright environment, dark environment), automatically controlled based on application settings, etc.
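The pulse-width-modulated control option mentioned above can be made concrete with a small sketch. The linear tint-to-duty-cycle mapping and the automatic ambient-light override are illustrative assumptions; the disclosure names the control options but not their implementation.

```python
def electrochromic_duty_cycle(tint_level, ambient_lux=None):
    """Map a requested tint (0 = clear, 1 = darkest) to a PWM duty cycle for
    the electrochromic layer's conductive surfaces."""
    if ambient_lux is not None:                        # automatic control
        tint_level = min(1.0, ambient_lux / 10000.0)   # brighter -> darker
    return max(0.0, min(1.0, tint_level))              # duty cycle to driver

print(electrochromic_duty_cycle(0.5))                # user-controlled: 0.5
print(electrochromic_duty_cycle(0.0, ambient_lux=20000))  # bright day: 1.0
```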

[000210] Figure 14L illustrates a head-worn computer 102 with removable electrochromic lenses 1461. In this embodiment, the removable electrochromic lens is mechanically held in place with lens magnets 1418 (e.g. as described herein elsewhere). The head-worn computer further includes electrochromic connections 1460. The two electrochromic connections 1460a are positioned such that they will make contact with two separate electrically conductive layers within the electrochromic layer that is on the inside of the lens 1461. In embodiments, the electrochromic connections 1460 may be magnetic, and the two pairs 1460a and 1460b may be positioned to attract one another such that the electrically conductive layers are squeezed between the magnetic elements to ensure good electrical contact with the connections 1460a, which provide the power to the electrochromic layer for the tint control thereof. While this embodiment illustrates the removable lens 1461 as being held in place with lens magnets 1418, the inventors appreciate that the lens may be held in place with mechanical systems (e.g. tabs, notches, etc.) or other systems. Further, embodiments may use a dual-purpose lens attachment and power system. For example, the head-worn computer 102 may have a pair of connectors that provide mechanical strength to hold the lens in place and a power connection to provide power to the electrochromic lens. In embodiments, the electrochromic layer is on the outside of the lens 1461 and the electrochromic connections 1460 mate to create an electrical connection between them. The electrochromic connections 1460b may then be in electrical contact with the electrochromic layer to provide the requisite power.

[000211] Figure 14m illustrates two embodiments of electrochromic lens connections. Each embodiment illustrates the lens and head-worn computer portion in cross section to describe how the connectors may operate. Lens 1414a includes an electrochromic layer 1462, a lens magnet 1418b and an electrochromic connector magnet 1460b. The portion of the head-worn computer 102 shown illustrates a lens magnet 1418a, an electrochromic connector magnet 1460a and a power line 1468 connected to the electrochromic connector magnet 1460a to provide power to the electrochromic layer. As can be seen, the lens magnets 1418a and 1418b are positioned opposite one another to attract one another and hold the lens in place. Similarly, electrochromic connector magnets 1460a and 1460b are positioned opposite one another such that they attract and squeeze the powered connector, electrochromic connector magnet 1460a, against an electrically conductive layer of the electrochromic layer 1462. The electrochromic layer 1462 is arranged such that the appropriate electrically conductive layer is exposed for the connection to the powered connector 1460a. In another embodiment, the head-worn computer 102b has an electrochromic connector pressure pin (e.g. a pogo pin with a spring-loaded tip to apply pressure on the electrochromic layer as it gets compressed to make good electrical contact). In this embodiment, the lens is mounted and the pin provides the power 1468 to the appropriate electrically conductive layer of the electrochromic layer.

[000212] Referring to Fig. 15, we now turn to describe a particular external user interface 104, referred to generally as a pen 1500. The pen 1500 is a specially designed external user interface 104 and can operate as a user interface to many different styles of HWC 102. The pen 1500 generally follows the form of a conventional pen, which is a familiar user-handled device and creates an intuitive physical interface for many of the operations to be carried out in the HWC system 100. The pen 1500 may be one of several user interfaces 104 used in connection with controlling operations within the HWC system 100. For example, the HWC 102 may watch for and interpret hand gestures 116 as control signals, while the pen 1500 may also be used as a user interface with the same HWC 102. Similarly, a remote keyboard may be used as an external user interface 104 in concert with the pen 1500. The combination of user interfaces, or the use of just one control system, generally depends on the operation(s) being executed in the HWC system 100.

[000213] While the pen 1500 may follow the general form of a conventional pen, it contains numerous technologies that enable it to function as an external user interface 104. Fig. 15 illustrates technologies comprised in the pen 1500. As can be seen, the pen 1500 may include a camera 1508, which is arranged to view through lens 1502. The camera may then be focused, such as through lens 1502, to image a surface upon which a user is writing or making other movements to interact with the HWC 102. There are situations where the pen 1500 will also have an ink, graphite, or other system such that what is being written can be seen on the writing surface. There are other situations where the pen 1500 does not have such a physical writing system, so there is no deposit on the writing surface and the pen would only be communicating data or commands to the HWC 102. The lens configuration is described in greater detail herein. The function of the camera is to capture information from an unstructured writing surface such that pen strokes can be interpreted as intended by the user. To assist in the prediction of the intended stroke path, the pen 1500 may include a sensor, such as an IMU 1512. Of course, the IMU could be included in the pen 1500 as separate parts (e.g. gyro, accelerometer, etc.) or as a single unit. In this instance, the IMU 1512 is used to measure and predict the motion of the pen 1500. In turn, the integrated microprocessor 1510 would take the IMU information and camera information as inputs and process the information to form a prediction of the pen tip movement.
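
As a non-authoritative sketch of the sensor fusion just described, the camera-derived and IMU-derived motion estimates might be blended with a confidence weight. The weighting scheme and all parameter values below are assumptions, not taken from the disclosure.

```python
# Minimal sketch of fusing camera and IMU estimates of pen-tip motion.
# The confidence-weighted blend is an assumption for illustration.

def predict_tip_velocity(camera_velocity, imu_velocity, camera_confidence):
    """Blend a camera-derived velocity estimate (from frame-to-frame
    surface tracking) with an IMU-derived estimate. camera_confidence
    in [0, 1] might come from image sharpness or tracked-feature count."""
    w = max(0.0, min(1.0, camera_confidence))
    return tuple(w * c + (1.0 - w) * i
                 for c, i in zip(camera_velocity, imu_velocity))

# Example: trust the camera mostly when the surface is in sharp focus.
v = predict_tip_velocity((1.0, 0.2), (1.2, 0.1), camera_confidence=0.8)
```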

[000214] The pen 1500 may also include a pressure monitoring system 1504, such as to measure the pressure exerted on the lens 1502. As will be described in greater detail herein, the pressure measurement can be used to predict the user's intention for changing the weight of a line, type of a line, type of brush, click, double click, and the like. In embodiments, the pressure sensor may be constructed using any force or pressure measurement sensor located behind the lens 1502, including, for example, a resistive sensor, a current sensor, a capacitive sensor, a voltage sensor such as a piezoelectric sensor, and the like.

[000215] The pen 1500 may also include a communications module 1518, such as for bi-directional communication with the HWC 102. In embodiments, the communications module 1518 may be a short distance communication module (e.g. Bluetooth). The communications module 1518 may be security matched to the HWC 102. The communications module 1518 may be arranged to communicate data and commands to and from the microprocessor 1510 of the pen 1500. The microprocessor 1510 may be programmed to interpret data generated from the camera 1508, IMU 1512, pressure sensor 1504, and the like, and then pass a command onto the HWC 102 through the communications module 1518, for example. In another embodiment, the data collected from any of the input sources (e.g. camera 1508, IMU 1512, pressure sensor 1504) by the microprocessor may be communicated by the communication module 1518 to the HWC 102, and the HWC 102 may perform data processing and prediction of the user's intention when using the pen 1500. In yet another embodiment, the data may be further passed on through a network 110 to a remote device 112, such as a server, for the data processing and prediction. The commands may then be communicated back to the HWC 102 for execution (e.g. display writing in the glasses display, make a selection within the UI of the glasses display, control a remote external device 112, control a local external device 108), and the like. The pen may also include memory 1514 for long- or short-term uses.
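
The division of processing between the pen, the HWC 102, and a remote server might be reflected in the link protocol along the following lines. The message fields and names are hypothetical and are not part of the disclosure.

```python
# Hedged sketch of the processing-location options described above: raw
# sensor data may be interpreted on the pen, on the HWC, or on a remote
# server. Message structure is an invented example.

import json

def build_packet(source: str, payload: dict, processed: bool) -> bytes:
    """Serialize a pen message for the short-range link (e.g. Bluetooth).
    processed=True means the pen's microprocessor already reduced the
    raw camera/IMU/pressure data to a command."""
    return json.dumps({
        "source": source,          # "camera" | "imu" | "pressure"
        "processed": processed,
        "payload": payload,
    }).encode("utf-8")

# On-pen interpretation: send a finished command.
pkt = build_packet("pressure", {"command": "click"}, processed=True)
# Raw forwarding: let the HWC (or a remote server) do the prediction.
pkt = build_packet("imu", {"gyro": [0.1, 0.0, 0.02]}, processed=False)
```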

[000216] The pen 1500 may also include a number of physical user interfaces, such as quick launch buttons 1522, a touch sensor 1520, and the like. The quick launch buttons 1522 may be adapted to provide the user with a fast way of jumping to a software application in the HWC system 100. For example, the user may be a frequent user of communication software packages (e.g. email, text, Twitter, Instagram, Facebook, Google+, and the like), and the user may program a quick launch button 1522 to command the HWC 102 to launch an application. The pen 1500 may be provided with several quick launch buttons 1522, which may be user programmable or factory programmable. The quick launch button 1522 may be programmed to perform an operation. For example, one of the buttons may be programmed to clear the digital display of the HWC 102. This would create a fast way for the user to clear the screens on the HWC 102 for any reason, such as, for example, to better view the environment. The quick launch button functionality will be discussed in further detail below. The touch sensor 1520 may be used to take gesture-style input from the user. For example, the user may be able to take a single finger and run it across the touch sensor 1520 to effect a page scroll.

[000217] The pen 1500 may also include a laser pointer 1524. The laser pointer 1524 may be coordinated with the IMU 1512 to coordinate gestures and laser pointing. For example, a user may use the laser 1524 in a presentation to help guide the audience in interpreting graphics, and the IMU 1512 may, either simultaneously or when the laser 1524 is off, interpret the user's gestures as commands or data input.

[000218] Figs. 16A-C illustrate several embodiments of lens and camera arrangements 1600 for the pen 1500. One aspect relates to maintaining a constant distance between the camera and the writing surface to enable the writing surface to be kept in focus for better tracking of movements of the pen 1500 over the writing surface. Another aspect relates to maintaining an angled surface following the circumference of the writing tip of the pen 1500 such that the pen 1500 can be rolled or partially rolled in the user's hand to create the feel and freedom of a conventional writing instrument.

[000219] Fig. 16A illustrates an embodiment of the writing lens end of the pen 1500. The configuration includes a ball lens 1604, a camera or image capture surface 1602, and a domed cover lens 1608. In this arrangement, the camera views the writing surface through the ball lens 1604 and domed cover lens 1608. The ball lens 1604 causes the camera to focus such that the camera views the writing surface when the pen 1500 is held in the hand in a natural writing position, such as with the pen 1500 in contact with a writing surface. In embodiments, the ball lens 1604 should be separated from the writing surface to obtain the highest resolution of the writing surface at the camera 1602. In embodiments, the ball lens 1604 is separated by approximately 1 to 3 mm. In this configuration, the domed cover lens 1608 provides a surface that can keep the ball lens 1604 separated from the writing surface at a constant distance, substantially independent of the angle used to write on the writing surface. In embodiments, the field of view of the camera in this arrangement is approximately 60 degrees.

[000220] The domed cover lens, or other lens 1608 used to physically interact with the writing surface, will be transparent or transmissive within the active bandwidth of the camera 1602. In embodiments, the domed cover lens 1608 may be spherical or another shape and comprised of glass, plastic, sapphire, diamond, and the like. In other embodiments where low resolution imaging of the surface is acceptable, the pen 1500 can omit the domed cover lens 1608 and the ball lens 1604 can be in direct contact with the surface.

[000221] Fig. 16B illustrates another structure where the construction is somewhat similar to that described in connection with Fig. 16A; however, this embodiment does not use a domed cover lens 1608, but instead uses a spacer 1610 to maintain a predictable distance between the ball lens 1604 and the writing surface, wherein the spacer may be spherical, cylindrical, tubular or another shape that provides spacing while allowing for an image to be obtained by the camera 1602 through the lens 1604. In a preferred embodiment, the spacer 1610 is transparent. In addition, while the spacer 1610 is shown as spherical, other shapes such as an oval, doughnut shape, half sphere, cone, cylinder or other form may be used.

[000222] Figure 16C illustrates yet another embodiment, where the structure includes a post 1614, such as running through the center of the lensed end of the pen 1500. The post 1614 may be an ink deposition system (e.g. ink cartridge), a graphite deposition system (e.g. graphite holder), or a dummy post whose purpose is mainly alignment. The selection of the post type is dependent on the pen's use. For instance, in the event the user wants to use the pen 1500 as a conventional ink-depositing pen as well as a fully functional external user interface 104, the ink system post would be the best selection. If there is no need for the 'writing' to be visible on the writing surface, the selection would be the dummy post. The embodiment of Fig. 16C includes camera(s) 1602 and an associated lens 1612, where the camera 1602 and lens 1612 are positioned to capture the writing surface without substantial interference from the post 1614. In embodiments, the pen 1500 may include multiple cameras 1602 and lenses 1612 such that more or all of the circumference of the tip 1614 can be used as an input system. In an embodiment, the pen 1500 includes a contoured grip that keeps the pen aligned in the user's hand so that the camera 1602 and lens 1612 remain pointed at the surface.

[000223] Another aspect of the pen 1500 relates to sensing the force applied by the user to the writing surface with the pen 1500. The force measurement may be used in a number of ways. For example, the force measurement may be used as a discrete value, or discontinuous event tracking, and compared against a threshold in a process to determine a user's intent. The user may want the force interpreted as a 'click' in the selection of an object, for instance. The user may intend multiple force exertions to be interpreted as multiple clicks. There may be times when the user holds the pen 1500 in a certain position or holds a certain portion of the pen 1500 (e.g. a button or touch pad) while clicking to effect a certain operation (e.g. a 'right click'). In embodiments, the force measurement may be used to track force and force trends. The force trends may be tracked and compared to threshold limits, for example. There may be one such threshold limit, multiple limits, groups of related limits, and the like. For example, when the force measurement indicates a fairly constant force that generally falls within a range of related threshold values, the microprocessor 1510 may interpret the force trend as an indication that the user desires to maintain the current writing style, writing tip type, line weight, brush type, and the like. In the event that the force trend appears to have gone outside of a set of threshold values intentionally, the microprocessor may interpret the action as an indication that the user wants to change the current writing style, writing tip type, line weight, brush type, and the like. Once the microprocessor has made a determination of the user's intent, a change in the current writing style, writing tip type, line weight, brush type, and the like may be executed. In embodiments, the change may be noted to the user (e.g. in a display of the HWC 102), and the user may be presented with an opportunity to accept the change.
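
A minimal sketch of the discrete-event interpretation described above, assuming time-stamped force samples and an invented threshold interface; the threshold value and timing window are not specified by the disclosure and are assumptions here.

```python
# Illustrative sketch of detecting 'click' and 'double click' events from
# (time, force) samples by finding threshold crossings, per the
# discussion above. Values and names are assumptions.

def detect_click_events(samples, threshold, double_click_window=0.3):
    """samples: list of (time_s, force) tuples in time order.
    Returns a list of ("click" | "double_click", time_s) events."""
    # Times at which force rises from below to above the threshold.
    rises = [t for (t0, f0), (t, f) in zip(samples, samples[1:])
             if f0 < threshold <= f]
    events, i = [], 0
    while i < len(rises):
        # Pair two rises close together in time into a double click.
        if i + 1 < len(rises) and rises[i + 1] - rises[i] <= double_click_window:
            events.append(("double_click", rises[i]))
            i += 2
        else:
            events.append(("click", rises[i]))
            i += 1
    return events
```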

[000224] Fig. 17A illustrates an embodiment of a force sensing surface tip 1700 of a pen 1500. The force sensing surface tip 1700 comprises a surface connection tip 1702 (e.g. a lens as described herein elsewhere) in connection with a force or pressure monitoring system 1504. As a user uses the pen 1500 to write on a surface or simulate writing on a surface, the force monitoring system 1504 measures the force or pressure the user applies to the writing surface and the force monitoring system communicates data to the microprocessor 1510 for processing. In this configuration, the microprocessor 1510 receives force data from the force monitoring system 1504 and processes the data to make predictions of the user's intent in applying the particular force that is currently being applied. In embodiments, the processing may be provided at a location other than on the pen (e.g. at a server in the HWC system 100, on the HWC 102). For clarity, when reference is made herein to processing information on the microprocessor 1510, the processing of information contemplates processing the information at a location other than on the pen as well. The microprocessor 1510 may be programmed with force threshold(s), force signature(s), a force signature library and/or other characteristics intended to guide an inference program in determining the user's intentions based on the measured force or pressure. The microprocessor 1510 may be further programmed to make inferences from the force measurements as to whether the user has attempted to initiate a discrete action (e.g. a user interface selection 'click') or is performing a constant action (e.g. writing within a particular writing style). The inferencing process is important as it causes the pen 1500 to act as an intuitive external user interface 104.

[000225] Fig. 17B illustrates a force 1708 versus time 1710 trend chart with a single threshold 1718. The threshold 1718 may be set at a level that indicates a discrete force exertion indicative of a user's desire to cause an action (e.g. select an object in a GUI). Event 1712, for example, may be interpreted as a click or selection command because the force quickly increased from below the threshold 1718 to above the threshold 1718. The event 1714 may be interpreted as a double click because the force quickly increased above the threshold 1718, decreased below the threshold 1718, and then essentially repeated quickly. The user may also cause the force to go above the threshold 1718 and hold for a period, indicating that the user is intending to select an object in the GUI (e.g. a GUI presented in the display of the HWC 102) and 'hold' for a further operation (e.g. moving the object).

[000226] While a threshold value may be used to assist in the interpretation of the user's intention, a signature force event trend may also be used. The threshold and signature may be used in combination, or either method may be used alone. For example, a single-click signature may be represented by a certain force trend signature or set of signatures. The single-click signature(s) may require that the trend meet a criteria of a rise time between x and y values, a hold time between a and b values, and a fall time between c and d values, for example. Signatures may be stored for a variety of functions such as click, double click, right click, hold, move, etc. The microprocessor 1510 may compare the real-time force or pressure tracking against the signatures from a signature library to make a decision and issue a command to the software application executing in the GUI.
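
One way the rise/hold/fall signature criteria might be encoded is sketched below; the signature structure and all time bounds are illustrative assumptions, not values from the disclosure.

```python
# Minimal sketch of matching a measured force trend against a stored
# signature library, following the rise/hold/fall criteria above.
# The library contents are invented for illustration.

SIGNATURES = {
    "single_click": {"rise": (0.01, 0.10), "hold": (0.02, 0.25), "fall": (0.01, 0.10)},
    "hold":         {"rise": (0.01, 0.10), "hold": (0.50, 5.00), "fall": (0.01, 0.20)},
}

def match_signature(rise_s: float, hold_s: float, fall_s: float) -> str | None:
    """Return the first signature whose rise/hold/fall windows (seconds)
    all contain the measured durations, else None (no match)."""
    for name, sig in SIGNATURES.items():
        if (sig["rise"][0] <= rise_s <= sig["rise"][1]
                and sig["hold"][0] <= hold_s <= sig["hold"][1]
                and sig["fall"][0] <= fall_s <= sig["fall"][1]):
            return name
    return None
```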

[000227] Fig. 17C illustrates a force 1708 versus time 1710 trend chart with multiple thresholds 1718. By way of example, the force trend is plotted on the chart with several pen force or pressure events. As noted, there are both presumably intentional events 1720 and presumably non-intentional events 1722. The two thresholds 1718 of Figure 17C create three zones of force: a lower, middle and higher range. The beginning of the trend indicates that the user is applying a lower-zone amount of force. This may mean that the user is writing with a given line weight and does not intend to change the weight. Then the trend shows a significant increase 1720 in force into the middle force range. This force change appears, from the trend, to have been sudden and thereafter sustained. The microprocessor 1510 may interpret this as an intentional change and as a result change the operation in accordance with preset rules (e.g. change line width, increase line weight, etc.). The trend then continues with a second apparently intentional event 1720 into the higher-force range. During the performance in the higher-force range, the force dips below the upper threshold 1718. This may indicate an unintentional force change, and the microprocessor may detect the change in range but not effect a change in the operations being coordinated by the pen 1500. As indicated above, the trend analysis may be done with thresholds and/or signatures.

[000228] Generally, in the present disclosure, instrument stroke parameter changes may be referred to as a change in line type, line weight, tip type, brush type, brush width, brush pressure, color, and other forms of writing, coloring, painting, and the like.

[000229] Another aspect of the pen 1500 relates to selecting an operating mode for the pen 1500 dependent on contextual information and/or selection interface(s). The pen 1500 may have several operating modes. For instance, the pen 1500 may have a writing mode where the user interface(s) of the pen 1500 (e.g. the writing surface end, quick launch buttons 1522, touch sensor 1520, motion based gesture, and the like) is optimized or selected for tasks associated with writing. As another example, the pen 1500 may have a wand mode where the user interface(s) of the pen is optimized or selected for tasks associated with software or device control (e.g. the HWC 102, external local device, remote device 112, and the like). The pen 1500, by way of another example, may have a presentation mode where the user interface(s) is optimized or selected to assist a user with giving a presentation (e.g. pointing with the laser pointer 1524 while using the button(s) 1522 and/or gestures to control the presentation or applications relating to the presentation). The pen may, for example, have a mode that is optimized or selected for a particular device that a user is attempting to control. The pen 1500 may have a number of other modes and an aspect of the present disclosure relates to selecting such modes.

[000230] Fig. 18A illustrates an automatic user interface(s) mode selection based on contextual information. The microprocessor 1510 may be programmed with IMU thresholds 1814 and 1812. The thresholds 1814 and 1812 may be used as indications of upper and lower bounds of angles 1802 and 1804 of the pen 1500 for certain expected positions during certain predicted modes. When the microprocessor 1510 determines that the pen 1500 is being held or otherwise positioned within angles 1802 corresponding to writing thresholds 1814, for example, the microprocessor 1510 may then institute a writing mode for the pen's user interfaces. Similarly, if the microprocessor 1510 determines (e.g. through the IMU 1512) that the pen is being held at an angle 1804 that falls between the predetermined wand thresholds 1812, the microprocessor may institute a wand mode for the pen's user interface. Both of these examples may be referred to as context-based user interface mode selection, as the mode selection is based on contextual information (e.g. position) collected automatically and then used through an automatic evaluation process to automatically select the pen's user interface(s) mode.
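
A minimal sketch of this angle-threshold mode selection follows. The disclosure does not specify the values of thresholds 1812 and 1814, so the ranges below are invented for illustration.

```python
# Sketch of context-based mode selection from the pen's IMU-derived
# angle, per the description above. Angle ranges are assumptions.

WRITING_RANGE = (20.0, 70.0)   # degrees from horizontal, assumed
WAND_RANGE    = (70.0, 120.0)  # degrees, assumed

def select_mode(pen_angle_deg: float) -> str:
    """Map the pen's measured angle to a user interface mode."""
    if WRITING_RANGE[0] <= pen_angle_deg <= WRITING_RANGE[1]:
        return "writing"
    if WAND_RANGE[0] <= pen_angle_deg <= WAND_RANGE[1]:
        return "wand"
    return "unchanged"   # outside both ranges: keep the current mode
```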

[000231] As with other examples presented herein, the microprocessor 1510 may monitor the contextual trend (e.g. the angle of the pen over time) in an effort to decide whether to stay in a mode or change modes. For example, through signatures, thresholds, trend analysis, and the like, the microprocessor may determine that a change is an unintentional change and therefore no user interface mode change is desired.

[000232] Fig. 18B illustrates an automatic user interface(s) mode selection based on contextual information. In this example, the pen 1500 is monitoring (e.g. through its microprocessor) whether or not the camera at the writing surface end 1508 is imaging a writing surface in close proximity to the writing surface end of the pen 1500. If the pen 1500 determines that a writing surface is within a predetermined relatively short distance, the pen 1500 may decide that a writing surface is present 1820 and the pen may go into a writing user interface(s) mode. In the event that the pen 1500 does not detect a relatively close writing surface 1822, the pen may predict that the pen is not currently being used as a writing instrument and the pen may go into a non-writing user interface(s) mode.
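
Sketched in code, assuming a camera-derived distance estimate (e.g. from focus or image scale) and an invented cutoff, the decision might look like this:

```python
# Sketch of camera-based writing-surface detection for mode selection.
# The distance estimate and cutoff are assumptions for illustration.

SURFACE_CUTOFF_MM = 10.0   # assumed "relatively short distance"

def select_mode_from_surface(estimated_distance_mm: float | None) -> str:
    """estimated_distance_mm might be derived from image focus or scale;
    None means no surface was detected in the camera image."""
    if (estimated_distance_mm is not None
            and estimated_distance_mm <= SURFACE_CUTOFF_MM):
        return "writing"
    return "non_writing"
```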

[000233] Fig. 18C illustrates a manual user interface(s) mode selection. The user interface(s) mode may be selected based on a twist of a section 1824 of the pen 1500 housing, clicking an end button 1828, pressing a quick launch button 1522, interacting with touch sensor 1520, detecting a predetermined action at the pressure monitoring system (e.g. a click), detecting a gesture (e.g. detected by the IMU), etc. The manual mode selection may involve selecting an item in a GUI associated with the pen 1500 (e.g. an image presented in the display of HWC 102).

[000234] In embodiments, a confirmation selection may be presented to the user in the event a mode is going to change. The presentation may be physical (e.g. a vibration in the pen 1500), through a GUI, through a light indicator, etc.

[000235] Fig. 19 illustrates a couple of pen use-scenarios 1900 and 1901. There are many use scenarios, and we have presented a couple in connection with Fig. 19 as a way of illustrating them to further the understanding of the reader. As such, the use-scenarios should be considered illustrative and non-limiting.

[000236] Use scenario 1900 is a writing scenario where the pen 1500 is used as a writing instrument. In this example, quick launch button 1522A is pressed to launch a note application 1910 in the GUI 1908 of the HWC 102 display 1904. Once the quick launch button 1522A is pressed, the HWC 102 launches the note program 1910 and puts the pen into a writing mode. The user uses the pen 1500 to scribe symbols 1902 on a writing surface; the pen records the scribing and transmits the scribing to the HWC 102, where symbols representing the scribing are displayed 1912 within the note application 1910.

[000237] Use scenario 1901 is a gesture scenario where the pen 1500 is used as a gesture capture and command device. In this example, the quick launch button 1522B is activated and the pen 1500 activates a wand mode such that an application launched on the HWC 102 can be controlled. Here, the user sees an application chooser 1918 in the display(s) of the HWC 102 where different software applications can be chosen by the user. The user gestures (e.g. swipes, spins, turns, etc.) with the pen to cause the application chooser 1918 to move from application to application. Once the correct application is identified (e.g. highlighted) in the chooser 1918, the user may gesture or click or otherwise interact with the pen 1500 such that the identified application is selected and launched. Once an application is launched, the wand mode may be used to scroll, rotate, change applications, select items, initiate processes, and the like, for example.

[000238] In an embodiment, the quick launch button 1522A may be activated and the HWC 102 may launch an application chooser presenting to the user a set of applications. For example, the quick launch button may launch a chooser to show all communication programs (e.g. SMS, Twitter, Instagram, Facebook, email, etc.) available for selection such that the user can select the program the user wants and then go into a writing mode. By way of further example, the launcher may bring up selections for various other groups that are related or categorized as generally being selected at a given time (e.g. Microsoft Office products, communication products, productivity products, note products, organizational products, and the like).

[000239] Fig. 20 illustrates yet another embodiment of the present disclosure. Figure 20 illustrates a watchband clip-on controller 2000. The watchband clip-on controller may be a controller used to control the HWC 102 or devices in the HWC system 100. The watchband clip-on controller 2000 has a fastener 2018 (e.g. a rotatable clip) that is mechanically adapted to attach to a watchband, as illustrated at 2004.

[000240] The watchband controller 2000 may have quick launch interfaces 2008 (e.g. to launch applications and choosers as described herein), a touch pad 2014 (e.g. to be used as a touch-style mouse for GUI control in a HWC 102 display) and a display 2012. The clip 2018 may be adapted to fit a wide range of watchbands so it can be used in connection with a watch that is independently selected for its function. The clip, in embodiments, is rotatable such that a user can position it in a desirable manner. In embodiments, the clip may be a flexible strap. In embodiments, the flexible strap may be adapted to be stretched to attach to a hand, wrist, finger, device, weapon, and the like.

[000241] In embodiments, the watchband controller may be configured as a removable and replaceable watchband. For example, the controller may be incorporated into a band with a certain width, segment spacings, etc. such that the watchband, with its incorporated controller, can be attached to a watch body. The attachment, in embodiments, may be mechanically adapted to attach with a pin upon which the watchband rotates. In embodiments, the watchband controller may be electrically connected to the watch and/or watch body such that the watch, watch body and/or the watchband controller can communicate data between them.

[000242] The watchband controller may have 3-axis motion monitoring (e.g. through an IMU, accelerometers, magnetometers, gyroscopes, etc.) to capture user motion. The user motion may then be interpreted for gesture control.

[000243] In embodiments, the watchband controller may comprise fitness sensors and a fitness computer. The sensors may track heart rate, calories burned, strides, distance covered, and the like. The data may then be compared against performance goals and/or standards for user feedback.

[000244] Another aspect of the present disclosure relates to visual display techniques relating to micro Doppler ("mD") target tracking signatures ("mD signatures"). mD is a radar technique that uses a series of angle-dependent electromagnetic pulses that are broadcast into an environment, after which return pulses are captured. Changes between the broadcast pulse and return pulse are indicative of changes in the shape, distance and angular location of objects or targets in the environment. These changes provide signals that can be used to track a target and identify the target through the mD signature. Each target or target type has a unique mD signature. Shifts in the radar pattern can be analyzed in the time domain and frequency domain based on mD techniques to derive information about the types of targets present (e.g. whether people are present), the motion of the targets, the relative angular location of the targets and the distance to the targets. By selecting a frequency used for the mD pulse relative to known objects in the environment, the pulse can penetrate the known objects to enable information about targets to be gathered even when the targets are visually blocked by the known objects. For example, pulse frequencies can be used that will penetrate concrete buildings to enable people to be identified inside the building. Multiple pulse frequencies can be used as well in the mD radar to enable different types of information to be gathered about the objects in the environment. In addition, the mD radar information can be combined with other information, such as distance measurements or images captured of the environment, that is analyzed jointly to provide improved object identification and improved target identification and tracking. In embodiments, the analysis can be performed on the HWC or the information can be transmitted to a remote network for analysis and results transmitted back to the HWC. Distance measurements can be provided by laser range finding, structured lighting, stereoscopic depth maps or sonar measurements. Images of the environment can be captured using one or more cameras capable of capturing images from visible, ultraviolet or infrared light. The mD radar can be attached to the HWC, located adjacently (e.g. in a vehicle) and associated wirelessly with the HWC, or located remotely. Maps or other previously determined information about the environment can also be used in the analysis of the mD radar information. Embodiments of the present disclosure relate to visualizing the mD signatures in useful ways.
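
The frequency-domain analysis mentioned above rests on the standard radar Doppler relation f_d = 2v/λ. As a minimal sketch (the pulse parameters are examples, not values from the disclosure):

```python
# Radial velocity of a target from the measured Doppler shift of a
# monostatic radar return, using the standard relation f_d = 2*v/lambda.

C = 299_792_458.0  # speed of light, m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Positive return value means the target is approaching."""
    wavelength = C / carrier_hz
    return doppler_shift_hz * wavelength / 2.0

# Example: a 2 kHz shift at a 3 GHz carrier (wavelength ~0.1 m)
# corresponds to roughly 100 m/s radial velocity.
print(radial_velocity(2000.0, 3e9))   # ~99.9 m/s
```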

[000245] Figure 21 illustrates a FOV 2102 of a HWC 102 from a wearer's perspective. The wearer, as described herein elsewhere, has a see-through FOV 2102 wherein the wearer views adjacent surroundings, such as the buildings illustrated in Figure 21. The wearer, as described herein elsewhere, can also see displayed digital content presented within a portion of the FOV 2102. The embodiment illustrated in Figure 21 indicates that the wearer can see the buildings and other surrounding elements in the environment as well as digital content representing traces, or travel paths, of bullets being fired by different people in the area. The surroundings are viewed through the transparency of the FOV 2102. The traces are presented via the digital computer display, as described herein elsewhere. In embodiments, the trace presented is based on a mD signature that is collected and communicated to the HWC in real time. The mD radar itself may be on or near the wearer of the HWC 102 or it may be located remote from the wearer. In embodiments, the mD radar scans the area, tracks and identifies targets, such as bullets, and communicates traces, based on locations, to the HWC 102.

[000246] There are several traces 2108 and 2104 presented to the wearer in the embodiment illustrated in Figure 21. The traces communicated from the mD radar may be associated with GPS locations, and the GPS locations may be associated with objects in the environment, such as people, buildings, vehicles, etc., in both a latitude and longitude perspective and an elevation perspective. The locations may be used as markers for the HWC such that the traces, as presented in the FOV, can be associated with, or fixed in space relative to, the markers. For example, if the friendly fire trace 2108 is determined, by the mD radar, to have originated from the upper right window of the building on the left, as illustrated in Figure 21, then a virtual marker may be set on or near the window. When the HWC views, through its camera or other sensor, for example, the building's window, the trace may then virtually anchor with the virtual marker on the window. Similarly, a marker may be set near the termination position or other flight position of the friendly fire trace 2108, such as the upper left window of the center building on the right, as illustrated in Figure 21. This technique fixes the trace in space such that the trace appears fixed to the environmental positions independent of where the wearer is looking. So, for example, as the wearer's head turns, the trace appears fixed to the marked locations.
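
A simplified, yaw-only sketch of re-projecting such a world-fixed marker into the FOV each frame is shown below. The disclosure does not specify the rendering math, so the function and its parameters are illustrative assumptions.

```python
# Sketch of anchoring a trace endpoint to an environmental marker: the
# marker's world bearing is re-projected against the current head yaw
# every frame, so the trace stays fixed as the wearer's head turns.

def marker_to_screen_x(marker_bearing_deg: float,
                       head_yaw_deg: float,
                       fov_deg: float = 30.0,
                       screen_width_px: int = 1280) -> int | None:
    """Horizontal pixel position of a world-fixed marker, or None when
    the marker lies outside the current field of view."""
    # Bearing of the marker relative to the wearer's gaze direction,
    # wrapped into [-180, 180) degrees.
    rel = (marker_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(rel) > fov_deg / 2.0:
        return None
    return int((rel / fov_deg + 0.5) * screen_width_px)
```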

[000247] In embodiments, certain user positions may be known and thus identified in the FOV. For example, the shooter of the friendly fire trace 2108 may be a known friendly combatant and as such his location may be known. The position may be known based on the GPS location of a mobile communication system he carries, such as another HWC 102. In other embodiments, the friendly combatant may be marked by another friendly. For example, if the friendly position in the environment is known through visual contact or communicated information, a wearer of the HWC 102 may use a gesture or external user interface 104 to mark the location. If a friendly combatant location is known, the originating position of the friendly fire trace 2108 may be color coded or otherwise distinguished from unidentified traces on the displayed digital content. Similarly, enemy fire traces 2104 may be color coded or otherwise distinguished on the displayed digital content. In embodiments, there may be an additional distinguished appearance on the displayed digital content for unknown traces.

[000248] In addition to situationally associated trace appearance, the trace colors or appearance may be different from the originating position to the terminating position. This path appearance change may be based on the mD signature. The mD signature may indicate that the bullet, for example, is slowing as it propagates, and this slowing pattern may be reflected in the FOV 2102 as a color or pattern change. This can create an intuitive understanding of where the shooter is located. For example, the originating color may be red, indicative of high speed, and it may change over the course of the trace to yellow, indicative of a slowing trace. This pattern change may also be different for a friendly, enemy and unknown combatant. A friendly trace may go from blue to green, for example.
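
As an illustration of speed-dependent trace coloring, a red-to-yellow gradient might be computed as follows; the palette endpoints and the speed range are assumptions, not values from the disclosure.

```python
# Sketch of mapping projectile speed to a trace segment color: fast
# segments render red, slowing segments shift toward yellow by raising
# the green channel. The speed range is an invented example.

def speed_to_rgb(speed_mps: float, v_min=200.0, v_max=900.0):
    """Return an (R, G, B) tuple for one trace segment."""
    t = max(0.0, min(1.0, (v_max - speed_mps) / (v_max - v_min)))
    return (255, int(255 * t), 0)

# A trace is then drawn segment by segment along the flight path.
trace_colors = [speed_to_rgb(v) for v in (880.0, 700.0, 450.0, 260.0)]
```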

[000249] Figure 21 illustrates an embodiment where the user sees the environment through the FOV and may also see color coded traces, which are dependent on bullet speed and combatant type, where the traces are fixed in environmental positions independent of the wearer's perspective. Other information, such as distance, range, range rings, time of day, date, engagement type (e.g. hold, stop firing, back away, etc.) may also be displayed in the FOV.

[000250] Another aspect of the present disclosure relates to mD radar techniques that trace and identify targets through other objects, such as walls (referred to generally as through wall mD), and visualization techniques related therewith. Figure 22 illustrates a through wall mD visualization technique according to the principles of the present disclosure. As described herein elsewhere, the mD radar scanning the environment may be local or remote from the wearer of a HWC 102. The mD radar may identify a target (e.g. a person) that is visible 2204 and then track the target as he goes behind a wall 2208. The tracking may then be presented to the wearer of a HWC 102 such that digital content reflective of the target and the target's movement, even behind the wall, is presented in the FOV 2202 of the HWC 102. In embodiments, the target, when out of visible sight, may be represented by an avatar in the FOV to provide the wearer with imagery representing the target.

[000251] mD target recognition methods can identify a target based on the vibrations and other small movements of the target. This can provide a personal signature for the target. In the case of humans, this may result in a personal identification of a target that has been previously characterized. Cardiac motion, heartbeat, lung expansion and other small movements within the body may be unique to a person, and if those attributes are pre-identified they may be matched in real time to provide a personal identification of a person in the FOV 2202. The person's mD signatures may be determined based on the position of the person. For example, the database of personal mD signature attributes may include mD signatures for a person standing, sitting, laying down, running, walking, jumping, etc. This may improve the accuracy of the personal data match when a target is tracked through mD signature techniques in the field. In the event a person is personally identified, a specific indication of the person's identity may be presented in the FOV 2202. The indication may be a color, shape, shade, name, indication of the type of person (e.g. enemy, friendly, etc.), etc. to provide the wearer with intuitive real time information about the person being tracked. This may be very useful in a situation where there is more than one person in an area of the person being tracked. If just one person in the area is personally identified, that person or the avatar of that person can be presented differently than other people in the area.

[000252] Figure 23 illustrates an mD scanned environment 2300. An mD radar may scan an environment in an attempt to identify objects in the environment. In this embodiment, the mD scanned environment reveals two vehicles 2302a and 2302b, an enemy combatant 2309, two friendly combatants 2308a and 2308b and a shot trace 2318. Each of these objects may be personally identified or type identified. For example, the vehicles 2302a and 2302b may be identified through their mD signatures as a tank and a heavy truck. The enemy combatant 2309 may be identified as a type (e.g. enemy combatant) or more personally (e.g. by name). The friendly combatants may be identified as a type (e.g. friendly combatant) or more personally (e.g. by name). The shot trace 2318 may be characterized by the type of projectile or the weapon type for the projectile, for example.

[000253] Figure 23a illustrates two separate HWC 102 FOV display techniques according to the principles of the present disclosure. FOV 2312 illustrates a map view 2310 where the mD scanned environment is presented. Here, the wearer has a perspective on the mapped area so he can understand all tracked targets in the area. This allows the wearer to traverse the area with knowledge of the targets. The second FOV illustrates a heads-up view to provide the wearer with an augmented reality style view of the environment that is in proximity of the wearer.

[000254] An aspect of the present disclosure relates to suppression of extraneous or stray light. As discussed herein elsewhere, eyeglow and faceglow are two such artifacts that develop from such light. Eyeglow and faceglow can be caused by image light escaping from the optics module. The escaping light is then visible, particularly in dark environments, when the user is viewing bright displayed images with the HWC. Light that escapes through the front of the HWC is visible as eyeglow, as it is light visible in the region of the user's eyes. Eyeglow can appear in the form of a small version of the displayed image that the user is viewing. Light that escapes from the bottom of the HWC shines onto the user's face, cheek or chest so that these portions of the user appear to glow. Eyeglow and faceglow can both increase the visibility of the user and highlight the use of the HWC, which may be viewed negatively by the user. As such, reducing eyeglow and faceglow is advantageous. In combat situations (e.g. the mD trace presentation scenarios described herein) and certain gaming situations, the suppression of extraneous or stray light is very important.

[000255] The disclosure relating to Figure 6 shows an example where a portion of the image light passes through the combiner 602 such that the light shines onto the user's face, thereby illuminating a portion of the user's face in what is generally referred to herein as faceglow. Faceglow may be caused by any portion of light from the HWC that illuminates the user's face.

[000256] An example source of the faceglow light is the wide cone angle light associated with the image light incident onto the combiner 602. The combiner can include a holographic mirror or a notch mirror in which the narrow bands of high reflectivity are matched to the wavelengths of light provided by the light source. The wide cone angle associated with the image light corresponds with the field of view provided by the HWC. Typically, the reflectivity of holographic mirrors and notch mirrors is reduced as the cone angle of the incident light increases above 8 degrees. As a result, for a field of view of 30 degrees, substantial image light can pass through the combiner and cause faceglow.

[000257] Figure 24 shows an illustration of a light trap 2410 for the faceglow light. In this embodiment, an extension of the outer shield lens of the HWC is coated with a light absorbing material in the region where the converging light responsible for faceglow is absorbed in a light trap 2410. The light absorbing material can be black or it can be a filter designed to absorb only the specific wavelengths of light provided by the light source(s) in the HWC. In addition, the surface of the light trap 2410 may be textured or fibrous to further improve the absorption.

[000258] Figure 25 illustrates an optical system for a HWC that includes an outer absorptive polarizer 2520 to block the faceglow light. In this embodiment, the image light is polarized and, as a result, the light responsible for faceglow is similarly polarized. The absorptive polarizer is oriented with a transmission axis such that the faceglow light is absorbed and not transmitted. In this case, the rest of the imaging system in the HWC may not require polarized image light and the image light may be polarized at any point before the combiner. In embodiments, the transmission axis of the absorptive polarizer 2520 is oriented vertically so that external glare from water (S polarized light) is absorbed and, correspondingly, the polarization of the image light is selected to be horizontal (S polarization). Consequently, image light that passes through the combiner 602 and is then incident onto the absorptive polarizer 2520 is absorbed. In Figure 25 the absorptive polarizer 2520 is shown outside the shield lens; alternatively, the absorptive polarizer 2520 can be located inside the shield lens.

[000259] Figure 26 illustrates an optical system for a HWC that includes a film with an absorptive notch filter 2620. In this case, the absorptive notch filter absorbs narrow bands of light that are selected to match the light provided by the optical system's light source. As a result, the absorptive notch filter is opaque with respect to the faceglow light and is transparent to the remainder of the wavelengths included in the visible spectrum so that the user has a clear view of the surrounding environment. A triple notch filter suitable for this approach is available from Iridian Spectral Technologies, Ottawa, ON: http://www.ilphotonics.com/cdv2/Iridian-Interference%20Filters/New%20filters/Triple%20Notch%20Filter.pdf

[000260] In embodiments, the combiner 602 may include a notch mirror coating to reflect the wavelengths of light in the image light, and a notch filter 2620 can be selected in correspondence to the wavelengths of light provided by the light source and the narrow bands of high reflectivity provided by the notch mirror. In this way, image light that is not reflected by the notch mirror is absorbed by the notch filter 2620. In embodiments of the disclosure, the light source can provide one narrow band of light for monochrome imaging or three narrow bands of light for full color imaging. The notch mirror and associated notch filter would then each provide one narrow band or three narrow bands of high reflectivity and absorption, respectively.

[000261] Figure 27 includes a microlouver film 2750 to block the faceglow light. Microlouver film is sold by 3M as ALCF-P, for example, and is typically used as a privacy filter for a computer. See http://multimedia.3m.com/mws/mediawebserver?mwsId=SSSSSuH8gc7nZxtUoY_xlY_eevUqel7zHvTSevTSeSSSSSS-&fn=ALCF-P_ABR2_Control_Film_DS.pdf The microlouver film transmits light within a somewhat narrow angle (e.g. within 30 degrees of normal) and absorbs light beyond 30 degrees of normal. In Figure 27, the microlouver film 2750 is positioned such that the faceglow light 2758 is incident beyond 30 degrees from normal while the see-through light 2755 is incident within 30 degrees of normal to the microlouver film 2750. As such, the faceglow light 2758 is absorbed by the microlouver film and the see-through light 2755 is transmitted so that the user has a bright see-through view of the surrounding environment.

[000262] We now turn back to a description of eye imaging technologies. Aspects of the present disclosure relate to various methods of imaging the eye of a person wearing the HWC 102. In embodiments, technologies for imaging the eye using an optical path involving the "off" state and "no power" state, which are described in detail below, are described. In embodiments, technologies for imaging the eye with optical configurations that do not involve reflecting the eye image off of DLP mirrors are described. In embodiments, unstructured light, structured light, or controlled lighting conditions are used to predict the eye's position based on the light reflected off of the front of the wearer's eye. In embodiments, a reflection of a presented digital content image is captured as it reflects off of the wearer's eye, and the reflected image may be processed to determine the quality (e.g. sharpness) of the image presented. In embodiments, the image may then be adjusted (e.g. focused differently) to increase the quality of the image presented based on the image reflection.

[000263] Figures 28a, 28b and 28c show illustrations of the various positions of the DLP mirrors. Figure 28a shows the DLP mirrors in the "on" state 2815. With the mirror in the "on" state 2815, illumination light 2810 is reflected along an optical axis 2820 that extends into the lower optical module 204. Figure 28b shows the DLP mirrors in the "off" state 2825. With the mirror in the "off" state 2825, illumination light 2810 is reflected along an optical axis 2830 that is substantially to the side of optical axis 2820, so that the "off" state light is directed toward a dark light trap as has been described herein elsewhere. Figure 28c shows the DLP mirrors in a third position, which occurs when no power is applied to the DLP. This "no power" state differs from the "on" and "off" states in that the mirror edges are not in contact with the substrate and as such are less accurately positioned. Figure 28c shows all of the DLP mirrors in the "no power" state 2835. The "no power" state is achieved by simultaneously setting the voltage to zero for the "on" contact and "off" contact for a DLP mirror; as a result, the mirror returns to a no-stress position where the DLP mirror is in the plane of the DLP platform, as shown in Figure 28c. Although not normally done, it is also possible to apply the "no power" state to individual DLP mirrors. When the DLP mirrors are in the "no power" state they do not contribute image content. Instead, as shown in Figure 28c, when the DLP mirrors are in the "no power" state, the illumination light 2810 is reflected along an optical axis 2840 that is between the optical axes 2820 and 2830 that are respectively associated with the "on" and "off" states, and as such this light doesn't contribute to the displayed image as a bright or dark pixel. This light can, however, contribute scattered light into the lower optical module 204, and as a result the displayed image contrast can be reduced or artifacts can be created in the image that detract from the image content. Consequently, it is generally desirable, in embodiments, to limit the time associated with the "no power" state to times when images are not displayed, or to reduce the time associated with having DLP mirrors in the "no power" state so that the effect of the scattered light is reduced.

[000264] Figure 29 shows an embodiment of the disclosure that can be used for displaying digital content images to a wearer of the HWC 102 and capturing images of the wearer's eye. In this embodiment, light from the eye 2971 passes back through the optics in the lower module 204 and the solid corrective wedge 2966; at least a portion of the light passes through the partially reflective layer 2960 and the solid illumination wedge 2964 and is reflected by a plurality of DLP mirrors on the DLP 2955 that are in the "no power" state. The reflected light then passes back through the illumination wedge 2964, and at least a portion of the light is reflected by the partially reflective layer 2960 and is captured by the camera 2980.

[000265] For comparison, illuminating light rays 2973 from the light source 2958 are also shown being reflected by the partially reflective layer 2960. The angle of the illuminating light 2973 is such that the DLP mirrors, when in the "on" state, reflect the illuminating light 2973 to form image light 2969 that substantially shares the same optical axis as the light from the wearer's eye 2971. In this way, images of the wearer's eye are captured in a field of view that overlaps the field of view for the displayed image content. In contrast, light reflected by DLP mirrors in the "off" state forms dark light 2975, which is directed substantially to the side of the image light 2969 and the light from the eye 2971. Dark light 2975 is directed toward a light trap 2962 that absorbs the dark light to improve the contrast of the displayed image, as has been described above in this specification.

[000266] In an embodiment, partially reflective layer 2960 is a reflective polarizer. The light that is reflected from the eye 2971 can then be polarized prior to entering the corrective wedge 2966 (e.g. with an absorptive polarizer between the upper module 202 and the lower module 204), with a polarization orientation relative to the reflective polarizer that enables the light reflected from the eye 2971 to substantially be transmitted by the reflective polarizer. A quarter wave retarder layer 2957 is then included adjacent to the DLP 2955 (as previously disclosed in Figure 3b) so that the light reflected from the eye 2971 passes through the quarter wave retarder layer 2957 once before being reflected by the plurality of DLP mirrors in the "no power" state and then passes through a second time after being reflected. By passing through the quarter wave retarder layer 2957 twice, the polarization state of the light from the eye 2971 is reversed, such that when it is incident upon the reflective polarizer, the light from the eye 2971 is then substantially reflected toward the camera 2980. By using a partially reflective layer 2960 that is a reflective polarizer and polarizing the light from the eye 2971 prior to entering the corrective wedge 2966, losses attributed to the partially reflective layer 2960 are reduced.

[000267] Figure 28c shows the case wherein the DLP mirrors are simultaneously in the "no power" state; this mode of operation can be particularly useful when the HWC 102 is first put onto the head of the wearer. When the HWC 102 is first put onto the head of the wearer, it is not necessary to display an image yet. As a result, the DLP can be in a "no power" state for all the DLP mirrors and an image of the wearer's eyes can be captured. The captured image of the wearer's eye can then be compared to a database, using iris identification techniques or other eye pattern identification techniques, to determine, for example, the identity of the wearer.

[000268] In a further embodiment illustrated by Figure 29, all of the DLP mirrors are put into the "no power" state for a portion of a frame time (e.g. 50% of a frame time for the displayed digital content image) and the capture of the eye image is synchronized to occur at the same time and for the same duration. By reducing the time that the DLP mirrors are in the "no power" state, the time during which light is scattered by the DLP mirrors being in the "no power" state is reduced, such that the wearer doesn't perceive a change in the displayed image quality. This is possible because the DLP mirrors have a response time on the order of microseconds while typical frame times for a displayed image are on the order of 0.016 seconds. This method of capturing images of the wearer's eye can be used periodically to capture repetitive images of the wearer's eye. For example, eye images could be captured for 50% of the frame time of every 10th frame displayed to the wearer. In another example, eye images could be captured for 10% of the frame time of every frame displayed to the wearer.
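
A timing sketch of this synchronization, using the 50%-of-frame and every-10th-frame example above, is shown below; the scheduler interface is hypothetical, and DLP settling time (microseconds) is neglected against the ~16 ms frame time.

```python
# Sketch of scheduling eye-image capture windows synchronized with the
# "no power" state, per the example figures in the text above.

FRAME_TIME_S = 1.0 / 60.0   # ~0.016 s per displayed frame

def eye_capture_windows(n_frames: int, every_nth: int = 10,
                        fraction: float = 0.5):
    """Yield (start_s, duration_s) windows during which all DLP mirrors
    are put in the 'no power' state and the eye camera is exposed."""
    for frame in range(0, n_frames, every_nth):
        yield (frame * FRAME_TIME_S, fraction * FRAME_TIME_S)

# Example: capture windows over 30 frames, ~8 ms every 10th frame.
print(list(eye_capture_windows(30)))
```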

[000269] Alternately, the "no power" state can be applied to a subset of the DLP mirrors (e.g. 10% of the DLP mirrors) while another subset is busy generating image light for content to be displayed. This enables the capture of an eye image(s) during the display of digital content to the wearer. The DLP mirrors used for eye imaging can, for example, be distributed randomly across the area of the DLP to minimize the impact on the quality of the digital content being displayed to the wearer. To improve the displayed image perceived by the wearer, the individual DLP mirrors put into the "no power" state for capturing each eye image can be varied over time, such as in a random pattern, for example. In yet a further embodiment, the DLP mirrors put into the "no power" state for eye imaging may be coordinated with the digital content in such a way that the "no power" mirrors are taken from a portion of the image that requires less resolution.

[000270] In the embodiments of the disclosure illustrated in Figures 9 and 29, the reflective surfaces provided by the DLP mirrors do not preserve the wavefront of the light from the wearer's eye, so the image quality of the captured eye image is somewhat limited; it may still be useful in certain embodiments. This is due to the DLP mirrors not being constrained to lie in the same plane. In the embodiment illustrated in Figure 9, the DLP mirrors are tilted so that they form rows of DLP mirrors that share common planes. In the embodiment illustrated in Figure 29, the individual DLP mirrors are not accurately positioned in the same plane since they are not in contact with the substrate. Examples of advantages of the embodiments associated with Figure 29 are: first, the camera 2980 can be located between the DLP 2955 and the illumination light source 2958 to provide a more compact upper module 202; second, the polarization state of the light reflected from the eye 2971 can be the same as that of the image light 2969, so that the optical path of the light reflected from the eye and the image light can be the same in the lower module 204.

[000271] Figure 30 shows an illustration of an embodiment for displaying images to the wearer and simultaneously capturing images of the wearer's eye, wherein light from the eye 2971 is reflected towards a camera 3080 by the partially reflective layer 2960. The partially reflective layer 2960 can be an optically flat layer such that the wavefront of the light from the eye 2971 is preserved and, as a result, higher quality images of the wearer's eye can be captured. In addition, since the DLP 2955 is not included in the optical path for the light from the eye 2971, and the eye imaging process shown in Figure 30 does not interfere with the displayed image, images of the wearer's eye can be captured independently of the displayed images (e.g. independent of the timing, resolution, or pixel count used in the image light).

[000272] In the embodiment illustrated in Figure 30, the partially reflective layer 2960 is a reflective polarizer, the illuminating light 2973 is polarized, the light from the eye 2971 is polarized and the camera 3080 is located behind a polarizer 3085. The polarization axis of the illuminating light 2973 and the polarization axis of the light from the eye are oriented perpendicular to the transmission axis of the reflective polarizer so that they are both substantially reflected by the reflective polarizer. The illumination light 2973 passes through a quarter wave layer 2957 before being reflected by the DLP mirrors in the DLP 2955. The reflected light passes back through the quarter wave layer 2957, so that the polarization states of the image light 2969 and dark light 2975 are reversed in comparison to the illumination light 2973. As such, the image light 2969 and dark light 2975 are substantially transmitted by the reflective polarizer. The DLP mirrors in the "on" state provide the image light 2969 along an optical axis that extends into the lower optical module 204 to display an image to the wearer. At the same time, DLP mirrors in the "off" state provide the dark light 2975 along an optical axis that extends to the side of the upper optics module 202. In the region of the corrective wedge 2966 where the dark light 2975 is incident on the side of the upper optics module 202, an absorptive polarizer 3085 is positioned with its transmission axis perpendicular to the polarization axis of the dark light and parallel to the polarization axis of the light from the eye, so that the dark light 2975 is absorbed and the light from the eye 2971 is transmitted to the camera 3080.

[000273] Figure 31 shows an illustration of another embodiment of a system for displaying images and simultaneously capturing images of the wearer's eye that is similar to the one shown in Figure 30. The difference in the system shown in Figure 31 is that the light from the eye 2971 is subjected to multiple reflections before being captured by the camera 3180. To enable the multiple reflections, a mirror 3187 is provided behind the absorptive polarizer 3185. The light from the eye 2971 is polarized prior to entering the corrective wedge 2966, with a polarization axis that is perpendicular to the transmission axis of the reflective polarizer that comprises the partially reflective layer 2960. In this way, the light from the eye 2971 is reflected first by the reflective polarizer, reflected second by the mirror 3187 and reflected third by the reflective polarizer before being captured by the camera 3180. While the light from the eye 2971 passes through the absorptive polarizer 3185 twice, its polarization axis is oriented parallel to the transmission axis of the absorptive polarizer 3185, so it is substantially transmitted. As with the system described in connection with Figure 30, the system shown in Figure 31 includes an optically flat partially reflective layer 2960 that preserves the wavefront of the light from the eye 2971 so that higher quality images of the wearer's eye can be captured. Also, since the DLP 2955 is not included in the optical path for the light reflected from the eye 2971, and the eye imaging process shown in Figure 31 does not interfere with the displayed image, images of the wearer's eye can be captured independently from the displayed images.

[000274] Figure 32 shows an illustration of a system for displaying images and simultaneously capturing images of the wearer's eye that includes a beam splitter plate 3212 comprised of a reflective polarizer, which is held in air between the light source 2958, the DLP 2955 and the camera 3280. The illumination light 2973 and the light from the eye 2971 are both polarized with polarization axes that are perpendicular to the transmission axis of the reflective polarizer. As a result, both the illumination light 2973 and the light from the eye 2971 are substantially reflected by the reflective polarizer. The illumination light 2973 is reflected toward the DLP 2955 by the reflective polarizer and split into image light 2969 and dark light 3275 depending on whether the individual DLP mirrors are respectively in the "on" state or the "off" state. By passing through the quarter wave layer 2957 twice, the polarization state of the illumination light 2973 is reversed in comparison to the polarization state of the image light 2969 and the dark light 3275. As a result, the image light 2969 and the dark light 3275 are then substantially transmitted by the reflective polarizer. The absorptive polarizer 3285 at the side of the beam splitter plate 3212 has a transmission axis that is perpendicular to the polarization axis of the dark light 3275 and parallel to the polarization axis of the light from the eye 2971, so that the dark light 3275 is absorbed and the light from the eye 2971 is transmitted to the camera 3280. As in the system shown in Figure 30, the system shown in Figure 32 includes an optically flat beam splitter plate 3212 that preserves the wavefront of the light from the eye 2971 so that higher quality images of the wearer's eye can be captured. Also, since the DLP 2955 is not included in the optical path for the light from the eye 2971, and the eye imaging process shown in Figure 32 does not interfere with the displayed image, images of the wearer's eye can be captured independently from the displayed images.

[000275] Eye imaging systems in which the polarization state of the light from the eye 2971 needs to be opposite to that of the image light 2969 (as shown in Figures 30, 31 and 32) need to be used with lower modules 204 that include combiners that reflect both polarization states. As such, these upper modules 202 are best suited for use with lower modules 204 that include combiners that are reflective regardless of polarization state; examples of such lower modules are shown in Figures 6, 8a, 8b, 8c and 24-27.

[000276] In a further embodiment shown in Figure 33, the partially reflective layer 3360 is comprised of a reflective polarizer on the side facing the illumination light 2973 and a short pass dichroic mirror on the side facing the light from the eye 3371 and the camera 3080, where the short pass dichroic mirror is a dielectric mirror coating that transmits visible light and reflects infrared light. The partially reflective layer 3360 can be comprised of a reflective polarizer bonded to the inner surface of the illumination wedge 2964 and a short pass dielectric mirror coating on the opposing inner surface of the corrective wedge 2966, wherein the illumination wedge 2964 and the corrective wedge 2966 are then optically bonded together. Alternatively, the partially reflective layer 3360 can be comprised of a thin substrate that has a reflective polarizer bonded to one side and a short pass dichroic mirror coating on the other side, where the partially reflective layer 3360 is then bonded between the illumination wedge 2964 and the corrective wedge 2966. In this embodiment, an infrared light source is included to illuminate the eye so that the light from the eye and the captured eye images are substantially comprised of infrared light. The wavelength of the infrared light is then matched to the reflecting wavelength of the short pass dichroic mirror and to a wavelength that the camera can capture; for example, an 800nm wavelength can be used. In this way, the short pass dichroic mirror transmits the image light and reflects the light from the eye. The camera 3080 is then positioned at the side of the corrective wedge 2966 in the area of the absorbing light trap 3382, which is provided to absorb the dark light 2975. By positioning the camera 3080 in a depression in the absorbing light trap 3382, scattering of the dark light 2975 by the camera 3080 can be reduced so that higher contrast images can be displayed to the wearer. An advantage of this embodiment is that the light from the eye need not be polarized, which can simplify the optical system and increase efficiency for the eye imaging system.

[000277] In yet another embodiment shown in Figure 32a, a beam splitter plate 3222 is comprised of a reflective polarizer on the side facing the illumination light 2973 and a short pass dichroic mirror on the side facing the light from the eye 3271 and the camera 3280. An absorbing surface 3295 is provided to trap the dark light 3275, and the camera 3280 is positioned in an opening in the absorbing surface 3295. In this way the system of Figure 32 can be made to function with unpolarized light from the eye 3271.

[000278] In embodiments directed to capturing images of the wearer's eye, light to illuminate the wearer's eye can be provided by several different sources, including: light from the displayed image (i.e. image light); light from the environment that passes through the combiner or other optics; and light provided by a dedicated eye light. Figures 34 and 34a show illustrations of dedicated eye illumination lights 3420. Figure 34 shows an illustration from a side view in which the dedicated eye illumination light 3420 is positioned at a corner of the combiner 3410 so that it doesn't interfere with the image light 3415. The dedicated eye illumination light 3420 is pointed so that the eye illumination light 3425 illuminates the eyebox 3427 where the eye 3430 is located when the wearer is viewing displayed images provided by the image light 3415. Figure 34a shows an illustration from the perspective of the eye of the wearer to show how the dedicated eye illumination light 3420 is positioned at the corner of the combiner 3410. While the dedicated eye illumination light 3420 is shown at the upper left corner of the combiner 3410, other positions along one of the edges of the combiner 3410, or on other optical or mechanical components, are possible as well. In other embodiments, more than one dedicated eye light 3420 with different positions can be used. In an embodiment, the dedicated eye light 3420 is an infrared light that is not visible to the wearer (e.g. 800 nm) so that the eye illumination light 3425 doesn't interfere with the displayed image perceived by the wearer.

[000279] Figure 35 shows a series of illustrations of captured eye images that show the eye glint (i.e. light that reflects off the front of the eye) produced by a dedicated eye light. In this embodiment of the disclosure, captured images of the wearer's eye are analyzed to determine the relative positions of the iris 3550, pupil, or other portion of the eye, and the eye glint 3560. The eye glint is a reflected image of the dedicated eye light 3420 when the dedicated light is used. Figure 35 illustrates the relative positions of the iris 3550 and the eye glint 3560 for a variety of eye positions. By providing a dedicated eye light 3420 in a fixed position, combined with the fact that the human eye is essentially spherical, or at least a reliably repeatable shape, the eye glint provides a fixed reference point against which the determined position of the iris can be compared to determine where the wearer is looking, either within the displayed image or within the see-through view of the surrounding environment. By positioning the dedicated eye light 3420 at a corner of the combiner 3410, the eye glint 3560 is formed away from the iris 3550 in the captured images. As a result, the positions of the iris and the eye glint can be determined more easily and more accurately during the analysis of the captured images, since they do not interfere with one another. In a further embodiment, the combiner includes an associated cut filter that prevents infrared light from the environment from entering the HWC and the camera is an infrared camera, so that the eye glint is only provided by light from the dedicated eye light. For example, the combiner can include a low pass filter that passes visible light while absorbing infrared light and the camera can include a high pass filter that absorbs visible light while passing infrared light.
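
By way of illustration, the following sketch shows one plausible form of the glint-based analysis: the offset between the detected iris (or pupil) center and the glint is converted to a viewing angle through a per-user calibration gain. The function names and the linear calibration model are assumptions for illustration, not the disclosed implementation.

```python
def gaze_from_glint(iris_center, glint_center, gain_deg_per_px=0.15):
    """Estimate gaze direction from the iris-center/glint offset in a captured image.

    iris_center and glint_center are (x, y) pixel coordinates found by image
    analysis; gain_deg_per_px is a per-user calibration constant (assumed,
    obtained from a calibration step). Returns (azimuth_deg, elevation_deg)
    relative to the neutral, forward-looking position.
    """
    dx = iris_center[0] - glint_center[0]
    dy = iris_center[1] - glint_center[1]
    return dx * gain_deg_per_px, -dy * gain_deg_per_px  # image y grows downward

# Example: iris displaced 20 px right and 8 px up from the glint
az, el = gaze_from_glint((340, 192), (320, 200))
print(f"azimuth {az:.1f} deg, elevation {el:.1f} deg")
```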

[000280] In an embodiment of the eye imaging system, the lens for the camera is designed to take into account the optics associated with the upper module 202 and the lower module 204. This is accomplished by designing the camera lens to work together with the optics in the upper module 202 and the lower module 204, so that a high MTF image of the wearer's eye is produced at the image sensor in the camera. In yet a further embodiment, the camera lens is provided with a large depth of field, eliminating the need to focus the camera to capture a sharp image of the eye. A large depth of field is typically provided by a high f/# lens (e.g. f/# > 5). In this case, the reduced light gathering associated with high f/# lenses is compensated by the inclusion of a dedicated eye light to enable a bright image of the eye to be captured. Further, the brightness of the dedicated eye light can be modulated and synchronized with the capture of eye images so that the dedicated eye light has a reduced duty cycle and the brightness of infrared light on the wearer's eye is reduced.

[000281] In a further embodiment, Figure 36a shows an illustration of an eye image that is used to identify the wearer of the HWC. In this case, an image of the wearer's eye 3611 is captured and analyzed for patterns of identifiable features 3612. The patterns are then compared to a database of eye images to determine the identity of the wearer. After the identity of the wearer has been verified, the operating mode of the HWC and the types of images, applications, and information to be displayed can be adjusted and controlled in correspondence to the determined identity of the wearer. Examples of adjustments to the operating mode depending on who the wearer is determined to be, or not be, include: making different operating modes or feature sets available, shutting down or sending a message to an external network, allowing guest features and applications to run, etc.
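
One common way to implement such a comparison against a database is to reduce each eye image to a binary feature code and match codes by normalized Hamming distance, in the style of well-known iris-recognition techniques. The sketch below assumes the feature extraction has already produced such codes; the threshold and data are illustrative assumptions.

```python
def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two equal-length binary iris codes."""
    assert len(code_a) == len(code_b)
    diff = sum(1 for a, b in zip(code_a, code_b) if a != b)
    return diff / len(code_a)

def identify_wearer(captured_code, database, threshold=0.32):
    """Return the identity whose stored code best matches, or None.

    database maps identity -> binary code; threshold is a typical
    iris-recognition acceptance bound (an assumption for illustration).
    """
    best_id, best_dist = None, 1.0
    for identity, stored_code in database.items():
        d = hamming_distance(captured_code, stored_code)
        if d < best_dist:
            best_id, best_dist = identity, d
    return best_id if best_dist <= threshold else None

# Example with toy 8-bit codes
db = {"alice": [0, 1, 1, 0, 1, 0, 0, 1], "bob": [1, 1, 0, 0, 0, 1, 1, 0]}
print(identify_wearer([0, 1, 1, 0, 1, 0, 1, 1], db))  # -> "alice"
```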

[000282] Figure 36b is an illustration of another embodiment using eye imaging, in which the sharpness of the displayed image is determined based on the eye glint produced by the reflection of the displayed image from the wearer's eye surface. By capturing images of the wearer's eye 3611, an eye glint 3622, which is a small version of the displayed image, can be captured and analyzed for sharpness. If the displayed image is determined to not be sharp, then an automated adjustment to the focus of the HWC optics can be performed to improve the sharpness. This ability to measure the sharpness of a displayed image at the surface of the wearer's eye can provide a very accurate measurement of image quality. Having the ability to measure and automatically adjust the focus of displayed images can be very useful in augmented reality imaging, where the focus distance of the displayed image can be varied in response to changes in the environment or changes in the method of use by the wearer.
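
The sharpness analysis of the captured glint could, for example, use a standard focus metric such as the variance of the image Laplacian, with the HWC optics adjusted until the metric stops improving. The sketch below is one plausible realization; capture_glint() and adjust_focus() are hypothetical hooks, not part of the disclosure.

```python
import numpy as np

def laplacian_variance(img):
    """Focus metric: variance of a discrete Laplacian; higher means sharper."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def autofocus_on_glint(capture_glint, adjust_focus, steps=(+1, -1), n_iter=20):
    """Hill-climb the focus setting to maximize glint sharpness.

    capture_glint() returns a 2-D grayscale array of the glint region;
    adjust_focus(delta) nudges the HWC optics (both hypothetical hooks).
    """
    best = laplacian_variance(capture_glint())
    for _ in range(n_iter):
        improved = False
        for step in steps:
            adjust_focus(step)
            score = laplacian_variance(capture_glint())
            if score > best:
                best, improved = score, True
            else:
                adjust_focus(-step)  # revert the unhelpful move
        if not improved:
            break
    return best
```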

[000283] An aspect of the present disclosure relates to controlling the HWC 102 through interpretations of eye imagery. In embodiments, eye-imaging technologies, such as those described herein, are used to capture an eye image or series of eye images for processing. The image(s) may be processed to determine a user-intended action, an HWC predetermined reaction, or other action. For example, the imagery may be interpreted as an affirmative user control action for an application on the HWC 102. Or, the imagery may cause the HWC 102 to react in a pre-determined way such that the HWC 102 is operating safely, intuitively, etc.

[000284] Figure 37 illustrates an eye imaging process that involves imaging the HWC 102 wearer's eye(s) and processing the images (e.g. through eye imaging technologies described herein) to determine in what position 3702 the eye is relative to its neutral or forward-looking position and/or the FOV 3708. The process may involve a calibration step where the user is instructed, through guidance provided in the FOV of the HWC 102, to look in certain directions such that a more accurate prediction of the eye position relative to areas of the FOV can be made. In the event the wearer's eye is determined to be looking towards the right side of the FOV 3708 (as illustrated in Figure 37, where the eye is looking out of the page), a virtual target line may be established to project what in the environment the wearer may be looking towards or at. The virtual target line may be used in connection with an image captured by a camera on the HWC 102 that images the surrounding environment in front of the wearer. In embodiments, the field of view of the camera capturing the surrounding environment matches, or can be matched (e.g. digitally), to the FOV 3708 such that the comparison is more straightforward. For example, with the camera capturing the image of the surroundings at an angle that matches the FOV 3708, the virtual line can be processed (e.g. in 2d or 3d, depending on the camera's image capabilities and/or the processing of the images) by projecting what surrounding environment objects align with the virtual target line. In the event there are multiple objects along the virtual target line, focal planes may be established corresponding to each of the objects such that digital content may be placed in an area in the FOV 3708 that aligns with the virtual target line and falls at a focal plane of an intersecting object. The user may then see the digital content when he focuses on the object in the environment, which is at the same focal plane. In embodiments, objects in line with the virtual target line may be established by comparison to mapped information of the surroundings.
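
By way of illustration, the following sketch shows one way the virtual target line could be tested against known environment objects once the camera's field of view has been matched to the FOV 3708: the gaze direction is compared against the heading of each object, and intersecting objects are ordered by distance so that focal planes can be associated with them. The object representation and angular tolerance are assumptions for illustration.

```python
import math

def angular_separation(h1_deg, v1_deg, h2_deg, v2_deg):
    """Approximate angular distance between two (horizontal, vertical) headings."""
    return math.hypot(h1_deg - h2_deg, v1_deg - v2_deg)

def objects_on_target_line(gaze_h, gaze_v, objects, tolerance_deg=2.0):
    """Return environment objects that lie along the virtual target line.

    objects is a list of dicts with 'name', 'heading_h', 'heading_v' and
    'distance_m' fields derived from the world-facing camera or mapped data
    (a hypothetical representation). Results are sorted by distance so a
    focal plane can be associated with each intersecting object.
    """
    hits = [o for o in objects
            if angular_separation(gaze_h, gaze_v,
                                  o["heading_h"], o["heading_v"]) <= tolerance_deg]
    return sorted(hits, key=lambda o: o["distance_m"])

scene = [{"name": "sign", "heading_h": 14.5, "heading_v": 2.0, "distance_m": 12.0},
         {"name": "building", "heading_h": 15.0, "heading_v": 1.0, "distance_m": 80.0}]
print(objects_on_target_line(15.0, 1.5, scene))  # both objects; sign is nearest
```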

[000285] In embodiments, the digital content that is in line with the virtual target line may not be displayed in the FOV until the eye is in the right position. This may be a predetermined process. For example, the system may be set up such that a particular piece of digital content (e.g. an advertisement, guidance information, object information, etc.) will appear in the event that the wearer looks at a certain object(s) in the environment. A virtual target line(s) may be developed that virtually connects the wearer's eye with an object(s) in the environment (e.g. a building, portion of a building, mark on a building, GPS location, etc.) and the virtual target line may be continually updated depending on the position and viewing direction of the wearer (e.g. as determined through GPS, e-compass, IMU, etc.) and the position of the object. When the virtual target line suggests that the wearer's pupil is substantially aligned with the virtual target line, or about to be aligned with it, the digital content may be displayed in the FOV 3704.

[000286] In embodiments, the time spent looking along the virtual target line and/or at a particular portion of the FOV 3708 may indicate that the wearer is interested in an object in the environment and/or digital content being displayed. In the event there is no digital content being displayed at the time a predetermined period of time is spent looking in a direction, digital content may be presented in that area of the FOV 3708. The time spent looking at an object may be interpreted as a command to display information about the object, for example. In other embodiments, the content may not relate to the object and may be presented because of the indication that the person is relatively inactive. In embodiments, the digital content may be positioned in proximity to the virtual target line, but not in line with it, such that the wearer's view of the surroundings is not obstructed but information can augment the wearer's view of the surroundings. In embodiments, the time spent looking along a target line in the direction of displayed digital content may be an indication of interest in the digital content. This may be used as a conversion event in advertising. For example, an advertiser may pay more for an ad placement if the wearer of the HWC 102 looks at a displayed advertisement for a certain period of time. As such, in embodiments, the time spent looking at the advertisement, as assessed by comparing eye position with the content placement, target line or other appropriate position, may be used to determine a rate of conversion or other compensation amount due for the presentation.

[000287] An aspect of the disclosure relates to removing content from the FOV of the HWC 102 when the wearer of the HWC 102 apparently wants to view the surrounding environment clearly. Figure 38 illustrates a situation where eye imagery suggests that the eye has moved or is moving quickly, so the digital content 3804 in the FOV 3808 is removed from the FOV 3808. In this example, the wearer may be looking quickly to the side, indicating that there is something on the side in the environment that has grabbed the wearer's attention. This eye movement 3802 may be captured through eye imaging techniques (e.g. as described herein) and, if the movement matches a predetermined movement (e.g. speed, rate, pattern, etc.), the content may be removed from view. In embodiments, the eye movement is used as one input and HWC movements indicated by other sensors (e.g. an IMU in the HWC) may be used as another indication. These various sensor movements may be used together to project an event that should cause a change in the content being displayed in the FOV.
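
By way of illustration, the following sketch shows one way the rapid-eye-movement trigger of Figure 38 could be implemented: successive gaze samples are differenced to estimate angular eye speed, and content is flagged for removal when the speed exceeds a threshold. The sample rate and threshold are assumed values.

```python
def should_remove_content(eye_positions_deg, sample_rate_hz=60.0,
                          speed_threshold_deg_s=120.0):
    """Detect a rapid eye movement from successive gaze samples.

    eye_positions_deg is a sequence of (h, v) gaze angles; returns True when
    the instantaneous angular speed exceeds the (assumed) threshold,
    indicating the wearer is looking away quickly and content should be hidden.
    """
    for (h0, v0), (h1, v1) in zip(eye_positions_deg, eye_positions_deg[1:]):
        speed = ((h1 - h0) ** 2 + (v1 - v0) ** 2) ** 0.5 * sample_rate_hz
        if speed > speed_threshold_deg_s:
            return True
    return False

samples = [(0.0, 0.0), (0.3, 0.0), (4.5, 0.2)]   # degrees, 60 Hz samples
print(should_remove_content(samples))            # True: ~252 deg/s on last step
```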

[000288] Another aspect of the present disclosure relates to determining a focal plane based on the wearer's eye convergence. A person's eyes are generally converged slightly, and converge more when the person focuses on something very close; this is generally referred to as convergence. In embodiments, convergence is calibrated for the wearer. That is, the wearer may be guided through certain focal plane exercises to determine how much the wearer's eyes converge at various focal planes and at various viewing angles. The convergence information may then be stored in a database for later reference. In embodiments, a general table may be used in the event there is no calibration step or the person skips the calibration step. The two eyes may then be imaged periodically to determine the convergence in an attempt to understand what focal plane the wearer is focused on. In embodiments, the eyes may be imaged to determine a virtual target line and then the eyes' convergence may be determined to establish the wearer's focus, and the digital content may be displayed or altered based thereon.
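
The relationship between convergence and focal plane can be illustrated with simple geometry: for an interpupillary distance IPD and a total convergence angle θ between the two visual axes, the fixation distance is approximately d = IPD / (2·tan(θ/2)). The sketch below uses this idealized formula in place of the per-wearer calibration table described above.

```python
import math

def focal_distance_m(ipd_m, convergence_deg):
    """Estimate fixation distance from eye convergence.

    ipd_m: interpupillary distance in meters (e.g. 0.063);
    convergence_deg: total convergence angle between the two visual axes,
    as measured from periodic images of both eyes. Idealized geometry; a
    real system would use the per-wearer calibration described above.
    """
    theta = math.radians(convergence_deg)
    if theta <= 0:
        return float("inf")  # parallel axes: focused at optical infinity
    return ipd_m / (2.0 * math.tan(theta / 2.0))

print(f"{focal_distance_m(0.063, 7.2):.2f} m")   # ~0.50 m for a near task
print(f"{focal_distance_m(0.063, 1.2):.2f} m")   # ~3 m for a mid-range task
```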

[000289] Figure 39 illustrates a situation where digital content is moved 3902 within one or both of the FOVs 3908 and 3910 to align with the convergence of the eyes as determined by the pupil movement 3904. By moving the digital content to maintain alignment, in embodiments, the overlapping nature of the content is maintained so the object appears properly to the wearer. This can be important in situations where 3D content is displayed.

[000290] An aspect of the present disclosure relates to controlling the HWC 102 based on events detected through eye imaging. A wearer winking, blinking, moving his eyes in a certain pattern, etc. may, for example, control an application of the HWC 102. Eye imaging (e.g. as described herein) may be used to monitor the eye(s) of the wearer and once a predetermined pattern is detected an application control command may be initiated.
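
By way of illustration, a pattern-to-command mapping of the kind described above might be realized as a simple matcher over a stream of detected eye events. The event vocabulary, time window, and command names are assumptions for illustration.

```python
# Assumed mapping from detected eye-event sequences to HWC application commands.
PATTERNS = {
    ("wink_left", "wink_left"): "next_item",
    ("wink_right",): "select",
    ("blink", "blink", "blink"): "dismiss_content",
}

def match_command(events, window_s=1.5):
    """Return a command if the recent eye events match a known pattern.

    events is a list of (timestamp, name) pairs produced by the eye-imaging
    pipeline. Only events within the trailing time window are considered.
    """
    now = events[-1][0] if events else 0.0
    recent = tuple(name for t, name in events if now - t <= window_s)
    for pattern, command in PATTERNS.items():
        if recent[-len(pattern):] == pattern:
            return command
    return None

stream = [(0.0, "blink"), (0.6, "wink_left"), (1.1, "wink_left")]
print(match_command(stream))  # -> "next_item"
```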

[000291] An aspect of the disclosure relates to monitoring the health of a person wearing a HWC 102 by monitoring the wearer's eye(s). Calibrations may be made such that the normal performance, under various conditions (e.g. lighting conditions, image light conditions, etc.), of a wearer's eyes may be documented. The wearer's eyes may then be monitored through eye imaging (e.g. as described herein) for changes in their performance. Changes in performance may be indicative of a health concern (e.g. concussion, brain injury, stroke, loss of blood, etc.). If a change is detected, data indicative of the change or event may be communicated from the HWC 102.

[000292] Aspects of the present disclosure relate to security and access of computer assets (e.g. the HWC itself and related computer systems) as determined through eye image verification. As discussed elsewhere herein, eye imagery may be compared to the eye imagery of known persons to confirm a person's identity. Eye imagery may also be used to confirm the identity of people wearing the HWCs 102 before allowing them to link together or share files, streams, information, etc.

[000293] A variety of use cases for eye imaging are possible based on technologies described herein. An aspect of the present disclosure relates to the timing of eye image capture. The timing of the capture of the eye image and the frequency of the capture of multiple images of the eye can vary depending on the use case for the information gathered from the eye image. For example, capturing an eye image to identify the user of the HWC may be required only when the HWC has been turned ON or when the HWC determines that the HWC has been put onto a wearer's head, to control the security of the HWC and the associated information that is displayed to the user. The orientation, movement pattern, stress or position of the earhorns (or other portions of the HWC) can be used to determine that a person has put the HWC onto their head with the intention to use the HWC. Those same parameters may be monitored in an effort to understand when the HWC is dismounted from the user's head. This may enable a situation where the capture of an eye image for identifying the wearer is completed only when a change in the wearing status is identified. In a contrasting example, capturing eye images to monitor the health of the wearer may require images to be captured periodically (e.g. every few seconds, minutes, hours, days, etc.). For example, the eye images may be taken at one-minute intervals when the images are being used to monitor the health of the wearer and detected movements indicate that the wearer is exercising. In a further contrasting example, capturing eye images to monitor the health of the wearer for long-term effects may only require that eye images be captured monthly. Embodiments of the disclosure relate to selection of the timing and rate of capture of eye images in correspondence with the selected use scenario associated with the eye images. These selections may be done automatically, as with the exercise example above where movements indicate exercise, or they may be set manually. In a further embodiment, the selection of the timing and rate of eye image capture is adjusted automatically depending on the mode of operation of the HWC. The selection of the timing and rate of eye image capture can further be selected in correspondence with input characteristics associated with the wearer, including age and health status, or sensed physical conditions of the wearer, including heart rate, chemical makeup of the blood and eye blink rate.
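
The use-case-dependent selection of capture timing could be expressed as a small policy table; the sketch below mirrors the examples given above (identification on a wearing-status change, one-minute health checks during exercise, monthly long-term checks), though the exact structure is an assumption.

```python
# Illustrative capture policy derived from the examples above; intervals in seconds.
CAPTURE_POLICY = {
    "identify_wearer": {"on_event": "wearing_status_change", "interval_s": None},
    "health_exercise": {"on_event": None, "interval_s": 60},              # every minute
    "health_longterm": {"on_event": None, "interval_s": 30 * 24 * 3600},  # monthly
}

def next_capture_due(use_case, last_capture_s, now_s, pending_events=()):
    """Decide whether an eye image should be captured for a given use case."""
    policy = CAPTURE_POLICY[use_case]
    if policy["on_event"] is not None:
        return policy["on_event"] in pending_events
    return (now_s - last_capture_s) >= policy["interval_s"]

print(next_capture_due("identify_wearer", 0, 5, ("wearing_status_change",)))  # True
print(next_capture_due("health_exercise", 0, 30))                             # False
```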

[000294] Figure 40 illustrates an embodiment in which digital content presented in a see-through FOV is positioned based on the speed at which the wearer is moving. When the person is not moving, as measured by sensor(s) in the HWC 102 (e.g. IMU, GPS-based tracking, etc.), digital content may be presented at the stationary person content position 4004. The content position 4004 is indicated as being in the middle of the see-through FOV 4002; however, this is meant to illustrate that the digital content is positioned within the see-through FOV at a place that is generally desirable given that the wearer is not moving, and as such the wearer's surrounding see-through view can be somewhat obstructed. So, the stationary person content position, or neutral position, may not be centered in the see-through FOV; it may be positioned somewhere in the see-through FOV deemed desirable, and the sensor feedback may shift the digital content from the neutral position. The movement of the digital content for a quickly moving person is also shown in Figure 40, wherein as the person turns their head to the side, the digital content moves out of the see-through FOV to content position 4008 and then moves back as the person turns their head back. For a slowly moving person, the head movement can be more complex and as such the movement of the digital content in and out of the see-through FOV can follow a path such as that shown by content position 4010.

[000295] In embodiments, the sensor that assesses the wearer's movements may be a GPS sensor, IMU, accelerometer, etc. The content position may be shifted from a neutral position to a position towards a side edge of the field of view as the forward motion increases. The content position may be shifted from a neutral position to a position towards a top or bottom edge of the field of view as the forward motion increases. The content position may shift based on a threshold speed of the assessed motion. The content position may shift linearly based on the speed of the forward motion. The content position may shift non-linearly based on the speed of the forward motion. The content position may shift outside of the field of view. In embodiments, the content is no longer displayed if the speed of movement exceeds a predetermined threshold, and it will be displayed again once the forward motion slows.
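
By way of illustration, the following sketch combines several of the variants listed above into a single mapping from forward speed to content offset: no shift below an onset speed, a linear shift toward the edge above it, and removal above a higher threshold. All constants are assumed values.

```python
def content_offset_px(speed_m_s, onset_m_s=0.5, gain_px_per_m_s=40.0,
                      hide_above_m_s=6.0, max_offset_px=400.0):
    """Map forward speed to a lateral content offset from the neutral position.

    Below onset_m_s the content stays at the neutral position; above it the
    offset grows linearly toward the edge of the field of view; above
    hide_above_m_s the content is removed entirely (returned as None).
    All constants are assumed values for illustration.
    """
    if speed_m_s >= hide_above_m_s:
        return None  # content no longer displayed until the wearer slows down
    if speed_m_s <= onset_m_s:
        return 0.0
    return min((speed_m_s - onset_m_s) * gain_px_per_m_s, max_offset_px)

for v in (0.0, 2.0, 7.0):
    print(v, "->", content_offset_px(v))   # 0.0, 60.0, None
```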

[000296] In embodiments, the content position may generally be referred to as shifting; it should be understood that the term shifting encompasses a process where the movement from one position to another within the see-through FOV or out of the FOV is visible to the wearer (e.g. the content appears to slowly or quickly move and the user perceives the movement itself) or the movement from one position to another may not be visible to the wearer (e.g. the content appears to jump in a discontinuous fashion or the content disappears and then reappears in the new position).

[000297] Another aspect of the present disclosure relates to removing the content from the field of view, or shifting it to a position within the field of view that increases the wearer's view of the surrounding environment, when a sensor causes an alert command to be issued. In embodiments, the alert may be due to a sensor or combination of sensors that sense a condition above a threshold value. For example, if an audio sensor detects a loud sound of a certain pitch, content in the field of view may be removed or shifted to provide a clear view of the surrounding environment for the wearer. In addition to the shifting of the content, in embodiments, an indication of why the content was shifted may be presented in the field of view or provided through audio feedback to the wearer. For instance, if a carbon monoxide sensor detects a high concentration in the area, content in the field of view may be shifted to the side of the field of view or removed from the field of view, and an indication may be provided to the wearer that there is a high concentration of carbon monoxide in the area. This new information, when presented in the field of view, may similarly be shifted within or outside of the field of view depending on the movement speed of the wearer.

[000298] Figure 41 illustrates how content may be shifted from a neutral position 4104 to an alert position 4108. In this embodiment, the content is shifted outside of the see-through FOV 4102. In other embodiments, the content may be shifted as described herein.

[000299] Another aspect of the present disclosure relates to identification of various vectors or headings related to the HWC 102, along with sensor inputs, to determine how to position content in the field of view. In embodiments, the speed of movement of the wearer is detected and used as an input for positioning the content and, depending on the speed, the content may be positioned with respect to a movement vector or heading (i.e. the direction of the movement), or a sight vector or heading (i.e. the direction of the wearer's sight). For example, if the wearer is moving very fast, the content may be positioned within the field of view with respect to the movement vector, because the wearer is only going to be looking towards the sides periodically and for short periods of time. As another example, if the wearer is moving slowly, the content may be positioned with respect to the sight heading, because the user may more freely be shifting his view from side to side.

[000300] Figure 42 illustrates two examples where the movement vector may affect content positioning. Movement vector A 4202 is shorter than movement vector B 4210, indicating that the forward speed and/or acceleration of movement of the person associated with movement vector A 4202 is lower than that of the person associated with movement vector B 4210. Each person is also indicated as having a sight vector or heading 4208 and 4212. The sight vectors A 4208 and B 4212 are the same from a relative perspective. The white area inside the black triangle in front of each person is indicative of how much time each person likely spends looking in a direction that is not in line with the movement vector. The time spent looking off angle A 4204 is indicated as being more than the time spent looking off angle B 4214. This may be because the movement vector speed A is lower than movement vector speed B; typically, the faster a person moves forward, the more the person tends to look in the forward direction. The FOVs A 4218 and B 4222 illustrate how content may be aligned depending on the movement vectors 4202 and 4210 and sight vectors 4208 and 4212. FOV A 4218 is illustrated as presenting content in line with the sight vector 4220. This may be due to the lower speed of the movement vector A 4202. This may also be due to the prediction of a larger amount of time spent looking off angle A 4204. FOV B 4222 is illustrated as presenting content in line with the movement vector 4224. This may be due to the higher speed of movement vector B 4210. This may also be due to the prediction of a shorter amount of time spent looking off angle B 4214.

[000301] Another aspect of the present disclosure relates to damping the rate of content position change within the field of view. As illustrated in Figure 43, the sight vector may undergo a rapid change 4304. This rapid change may be an isolated event or it may be made at or near a time when other sight vector changes are occurring. The wearer's head may be turning back and forth for some reason. In embodiments, rapid successive changes in sight vector may cause a damped rate of content position change 4308 within the FOV 4302. For example, the content may be positioned with respect to the sight vector, as described herein, and a rapid change in sight vector would normally cause a rapid content position change; however, since the sight vector is successively changing, the rate of position change with respect to the sight vector may be damped, slowed, or stopped. The position rate change may be altered based on the rate of change of the sight vector, an average of the sight vector changes, or otherwise.
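
One way to damp the rate of content position change is a rate-limited follow of the sight heading in which the allowed per-frame change shrinks, or drops to zero, while the sight heading is changing rapidly. The sketch below is one such scheme under assumed constants.

```python
def damped_follow(display_deg, sight_deg, recent_sight_rate_deg_s,
                  dt_s=0.016, alpha=0.2, max_rate_deg_s=30.0,
                  busy_rate_deg_s=90.0):
    """Advance the content heading one frame toward the sight heading.

    When the sight heading itself has been changing quickly (head turning
    back and forth), the allowed content rate is reduced so the content
    effectively holds still. All constants are illustrative assumptions.
    """
    limit = max_rate_deg_s
    if abs(recent_sight_rate_deg_s) > busy_rate_deg_s:
        limit = 0.0  # sight vector is thrashing: freeze the content position
    step = alpha * (sight_deg - display_deg)          # proportional pull
    max_step = limit * dt_s
    step = max(-max_step, min(max_step, step))        # rate clamp
    return display_deg + step

pos = 0.0
for _ in range(5):                  # steady 10-degree look to the right
    pos = damped_follow(pos, 10.0, recent_sight_rate_deg_s=20.0)
print(round(pos, 2))                # content eases toward 2.4 deg, not jumping
```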

[000302] Another aspect of the present disclosure relates to simultaneously presenting more than one piece of content in the field of view of a see-through optical system of a HWC 102, positioning one content item with the sight heading and one with the movement heading. Figure 44 illustrates two FOVs A 4414 and B 4420, which correspond respectively to the two identified sight vectors A 4402 and B 4404. Figure 44 also illustrates an object in the environment 4408 at a position relative to the sight vectors A 4402 and B 4404. When the person is looking along sight vector A 4402, the environment object 4408 can be seen through the field of view A 4414 at position 4412. As illustrated, sight-heading-aligned content is presented as TEXT in proximity to the environment object at position 4412. At the same time, other content 4418 is presented in the field of view A 4414 at a position aligned in correspondence with the movement vector. As the movement speed increases, the content 4418 may shift as described herein. When the sight vector of the person is sight vector B 4404, the environment object 4408 is not seen in the field of view B 4420. As a result, the sight-aligned content 4410 is not presented in field of view B 4420; however, the movement-aligned content 4418 is presented and is still dependent on the speed of the motion.

[000303] Figure 45 shows an example set of data for a person moving through an environment over a path that starts with a movement heading of 0 degrees and ends with a movement heading of 114 degrees, during which time the speed of movement varies from 0 m/sec to 20 m/sec. The sight heading can be seen to vary on either side of the movement heading while the person is moving and looking from side to side. Large changes in sight heading occur when the movement speed is 0 m/sec and the person is standing still; these are followed by step changes in movement heading.

[000304] Embodiments provide a process for determining the display heading that takes into account the way a user moves through an environment. The process provides a display heading that makes it easy for the user to find the displayed information, while also providing unencumbered see-through views of the environment, in response to different movements, speeds of movement, or different types of information being displayed.

[000305] Figure 46 illustrates a see-through view as may be seen when using a HWC wherein information is overlaid onto a see-through view of the environment. The tree and the building are actually in the environment and the text is displayed in the see-through display such that it appears overlaid on the environment. In addition to text information such as, for example, instructions and weather information, some augmented reality information is shown that relates to nearby objects in the environment.

[000306] In an embodiment, the display heading is determined based on the speed of movement. At low speeds, the display heading may be substantially the same as the sight heading, while at high speed the display heading may be substantially the same as the movement heading. In embodiments, as long as the user remains stationary, the displayed information is presented directly in front of the user and HWC. However, as the movement speed increases (e.g. above a threshold or continually, etc.), the display heading becomes substantially the same as the movement heading regardless of the direction the user is looking, so that when the user looks in the direction of movement, the displayed information is directly in front of the user and HWC, and when the user looks to the side, the displayed information is not visible.
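
The speed-dependent choice of display heading can be written as a blend between the sight heading and the movement heading, with a transition band between two speed thresholds. The sketch below is illustrative; the thresholds are assumptions, and wraparound at 360 degrees is ignored for brevity.

```python
def display_heading_deg(speed_m_s, sight_heading_deg, movement_heading_deg,
                        low_m_s=1.0, high_m_s=4.0):
    """Blend sight and movement headings according to forward speed.

    Below low_m_s the display heading equals the sight heading; above
    high_m_s it equals the movement heading; in between it interpolates.
    The thresholds are assumed values for illustration.
    """
    if speed_m_s <= low_m_s:
        w = 0.0
    elif speed_m_s >= high_m_s:
        w = 1.0
    else:
        w = (speed_m_s - low_m_s) / (high_m_s - low_m_s)
    return (1.0 - w) * sight_heading_deg + w * movement_heading_deg

print(display_heading_deg(0.0, 30.0, 0.0))   # 30.0: stationary, follow sight
print(display_heading_deg(2.5, 30.0, 0.0))   # 15.0: mid-speed blend
print(display_heading_deg(6.0, 30.0, 0.0))   # 0.0: fast, follow movement
```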

[000307] Rapid changes in sight heading can be followed by a slower change in the display heading to provide a damped response to head rotation. Alternatively, the display heading can be substantially the time-averaged sight heading, so that the displayed information is presented at a heading that is in the middle of a series of sight headings over a period of time. In this embodiment, if the user stops moving their head, the display heading gradually becomes the same as the sight heading and the displayed information moves into the display field of view in front of the user and HWC. In embodiments, when there is a high rate of sight heading change, the process delays the effect of the time-averaged sight heading on the display heading. In this way, the effect of rapid head movements on display heading is reduced and the positioning of the displayed information within the display field of view is stabilized laterally.

[000308] In another embodiment, the display heading is determined based on the speed of movement where, at high speed, the display heading is substantially the same as the movement heading. At mid-speed, the display heading is substantially the same as a time-averaged sight heading, so that rapid head rotations are damped out and the display heading is in the middle of back-and-forth head movements.

[000309] In yet another embodiment, the type of information being displayed is included in determining how the information should be displayed. Augmented reality information that is connected to objects in the environment is given a display heading that substantially matches the sight heading. In this way, as the user rotates their head, augmented reality information comes into view that is related to objects that are in the see-through view of the environment. At the same time, information that is not connected to objects in the environment is given a display heading that is determined based on the type of movements and speed of movements as previously described in this specification.

[000310] In yet a further embodiment, when the speed of movement is determined to be above a threshold, the information displayed is moved downward in the display field of view so that the upper portion of the display field of view has less information or no information displayed to provide the user with an unencumbered see-through view of the environment.

[000311] Figures 47 and 48 show illustrations of a see-through view including overlaid displayed information. Figure 47 shows the see-through view immediately after a rapid change in sight heading from the sight heading associated with the see-through view shown in Figure 46 wherein the change in sight heading comes from a head rotation. In this case, the display heading is delayed. Figure 48 shows how at a later time, the display heading catches up to the sight heading. The augmented reality information remains in positions within the display field of view where the association with objects in the environment can be readily made by the user.

[000312] Figure 49 shows an illustration of a see-through view example including overlaid displayed information that has been shifted downward in the display field of view to provide an unencumbered see-through view in the upper portion of the see-through view. At the same time, augmented reality labels have been maintained in locations within the display field of view so they can be readily associated with objects in the environment.

[000313] In a further embodiment, in an operating mode such as when the user is moving in an environment, digital content is presented at the side of the user's see-through FOV, so that the user can only view the digital content by turning their head. In this case, when the user is looking straight ahead, such as when the movement heading matches the sight heading, the see-through FOV does not include digital content. The user then accesses the digital content by turning their head to the side, whereupon the digital content moves laterally into the user's see-through FOV. In another embodiment, the digital content is ready for presentation and will be presented if an indication for its presentation is received. For example, the information may be ready for presentation, and if the sight heading or a predetermined position of the HWC 102 is achieved, the content may then be presented. The wearer may look to the side and the content may be presented. In another embodiment, the user may cause the content to move into an area in the field of view by looking in a direction for a predetermined period of time, blinking, winking, or displaying some other pattern that can be captured through eye imaging technologies (e.g. as described elsewhere herein).

[000314] In yet another embodiment, an operating mode is provided wherein the user can define sight headings for which the associated see-through FOV includes digital content or does not include digital content. In an example, this operating mode can be used in an office environment where, when the user is looking at a wall, digital content is provided within the FOV, whereas when the user is looking toward a hallway, the FOV is unencumbered by digital content. In another example, when the user is looking horizontally, digital content is provided within the FOV, but when the user looks down (e.g. to look at a desktop or a cellphone), the digital content is removed from the FOV.

[000315] Another aspect of the present disclosure relates to collecting and using eye position and sight heading information. Head-worn computing with motion heading, sight heading, and/or eye position prediction (sometimes referred to as "eye heading" herein) may be used to identify what a wearer of the HWC 102 is apparently interested in, and the information may be captured and used. In embodiments, the information may be characterized as viewing information because it apparently relates to what the wearer is looking at. The viewing information may be used to develop a personal profile for the wearer, which may indicate what the wearer tends to look at. The viewing information from several or many HWCs 102 may be captured such that group or crowd viewing trends may be established. For example, if the movement heading and sight heading are known, a prediction of what the wearer is looking at may be made and used to generate a personal profile or a portion of a crowd profile. In another embodiment, if the eye heading and location, sight heading and/or movement heading are known, a prediction of what is being looked at may be made. The prediction may involve understanding what is in proximity of the wearer, and this may be understood by establishing the position of the wearer (e.g. through GPS or other location technology) and establishing what mapped objects are known in the area. The prediction may involve interpreting images captured by the camera or other sensors associated with the HWC 102. For example, if the camera captures an image of a sign and the camera is in line with the sight heading, the prediction may involve assessing the likelihood that the wearer is viewing the sign. The prediction may involve capturing an image or other sensory information and then performing object recognition analysis to determine what is being viewed. For example, the wearer may be walking down a street and the camera in the HWC 102 may capture an image; a processor, either on-board or remote from the HWC 102, may recognize a face, object, marker, image, etc., and it may be determined that the wearer may have been looking at it or towards it.

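By way of illustration, the following sketch shows one way the prediction could combine the wearer's position with the sight heading and mapped objects: each object's bearing from the wearer is compared against the sight heading, and aligned objects within range are returned. The flat-earth approximation, map format, and tolerances are assumptions for illustration.

```python
import math

def predicted_view_targets(lat, lon, heading_deg, mapped_objects,
                           tolerance_deg=5.0, max_range_m=200.0):
    """Rank mapped objects by how well they align with the sight heading.

    mapped_objects: list of dicts with 'name', 'lat', 'lon'. Uses a local
    flat-earth approximation (adequate over a few hundred meters); bearing
    and distance are computed from the wearer's position. Illustrative only.
    """
    hits = []
    for obj in mapped_objects:
        dy = (obj["lat"] - lat) * 111_320.0                       # m per deg latitude
        dx = (obj["lon"] - lon) * 111_320.0 * math.cos(math.radians(lat))
        dist = math.hypot(dx, dy)
        bearing = math.degrees(math.atan2(dx, dy)) % 360.0
        off = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
        if dist <= max_range_m and off <= tolerance_deg:
            hits.append((off, dist, obj["name"]))
    return [name for off, dist, name in sorted(hits)]

objs = [{"name": "cafe sign", "lat": 37.78051, "lon": -122.39042},
        {"name": "bus stop", "lat": 37.78010, "lon": -122.38950}]
print(predicted_view_targets(37.78000, -122.39000, 325.0, objs))  # -> ['cafe sign']
```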

[000316] Figure 50 illustrates a cross section of an eyeball of a wearer of an HWC with focus points that can be associated with the eye imaging system of the disclosure. The eyeball 5010 includes an iris 5012 and a retina 5014. Because the eye imaging system of the disclosure provides coaxial eye imaging with a display system, images of the eye can be captured from a perspective directly in front of the eye and in line with where the wearer is looking. In embodiments of the disclosure, the eye imaging system can be focused at the iris 5012 and/or the retina 5014 of the wearer, to capture images of the external surface of the iris 5012 or the internal portions of the eye, which include the retina 5014. Figure 50 shows light rays 5020 and 5025 that are respectively associated with capturing images of the iris 5012 or the retina 5014, wherein the optics associated with the eye imaging system are respectively focused at the iris 5012 or the retina 5014. Illuminating light can also be provided in the eye imaging system to illuminate the iris 5012 or the retina 5014. Figure 51 shows an illustration of an eye including an iris 5130 and a sclera 5125. In embodiments, the eye imaging system can be used to capture images that include the iris 5130 and portions of the sclera 5125. The images can then be analyzed to determine color, shapes and patterns that are associated with the user. In further embodiments, the focus of the eye imaging system is adjusted to enable images to be captured of the iris 5012 or the retina 5014. Illuminating light can also be adjusted to illuminate the iris 5012 or to pass through the pupil of the eye to illuminate the retina 5014.
The illuminating light can be visible light to enable capture of colors of the iris 5012 or the retina 5014, or the illuminating light can be ultraviolet (e.g. 340nm), near infrared (e.g. 850nm) or mid-wave infrared (e.g. 5000nm) light to enable capture of hyperspectral characteristics of the eye.

[000317] Figure 53 illustrates a display system that includes an eye imaging system. The display system includes a polarized light source 2958, a DLP 2955, a quarter wave film 2957 and a beam splitter plate 5345. The eye imaging system includes a camera 3280, illuminating lights 5355 and the beam splitter plate 5345. The beam splitter plate 5345 can be a reflective polarizer on the side facing the polarized light source 2958 and a hot mirror on the side facing the camera 3280, wherein the hot mirror reflects infrared light (e.g. wavelengths 700 to 2000nm) and transmits visible light (e.g. wavelengths 400 to 670nm). The beam splitter plate 5345 can be comprised of multiple laminated films, a substrate film with coatings, or a rigid transparent substrate with films on either side. By providing a reflective polarizer on the one side, the light from the polarized light source 2958 is reflected toward the DLP 2955, where it passes through the quarter wave film 2957 once, is reflected by the DLP mirrors in correspondence with the image content being displayed by the DLP 2955, and then passes back through the quarter wave film 2957. In so doing, the polarization state of the light from the polarized light source is changed, so that it is transmitted by the reflective polarizer on the beam splitter plate 5345 and the image light 2971 passes into the lower optics module 204, where the image is displayed to the user. At the same time, infrared light 5357 from the illuminating lights 5355 is reflected by the hot mirror so that it passes into the lower optics module 204, where it illuminates the user's eye. Portions of the infrared light 2969 are reflected by the user's eye, and this light passes back through the lower optics module 204, is reflected by the hot mirror on the beam splitter plate 5345, and is captured by the camera 3280. In this embodiment, the image light 2971 is polarized while the infrared light 5357 and 2969 can be unpolarized. In an embodiment, the illuminating lights 5355 provide two different infrared wavelengths and eye images are captured in pairs, wherein the pairs of eye images are analyzed together to improve the accuracy of identification of the user based on iris analysis.

[000318] Figure 54 shows an illustration of a further embodiment of a display system with an eye imaging system. In addition to the features of Figure 53, this system includes a second camera 5460, which is provided to capture eye images in the visible wavelengths. Illumination of the eye can be provided by the displayed image or by see-through light from the environment. Portions of the displayed image can be modified to provide improved illumination of the user's eye when images of the eye are to be captured, such as by increasing the brightness of the displayed image or increasing the white areas within the displayed image. Further, modified displayed images can be presented briefly for the purpose of capturing eye images, and the display of the modified images can be synchronized with the capture of the eye images. As shown in Figure 54, visible light 5467 is polarized when it is captured by the second camera 5460, since it passes through the beam splitter 5445 and the beam splitter 5445 is a reflective polarizer on the side facing the second camera 5460. In this eye imaging system, visible eye images can be captured by the second camera 5460 at the same time that infrared eye images are captured by the camera 3280. The characteristics of the camera 3280 and the second camera 5460, and the associated images they capture, can differ in terms of resolution and capture rate.

[000319] Figures 52a and 52b illustrate captured images of eyes where the eyes are illuminated with structured light patterns. In Figure 52a, an eye 5220 is shown with a projected structured light pattern 5230, where the light pattern is a grid of lines. A light pattern such as 5230 can be provided by the light source 5355 shown in Figure 53 by including a diffractive or a refractive device to modify the light 5357, as is known by those skilled in the art. A visible light source can also be included for the second camera 5460 shown in Figure 54, which can include a diffractive or refractive device to modify the light 5467 to provide a light pattern. Figure 52b illustrates how the structured light pattern 5230 becomes distorted to 5235 when the user's eye 5225 looks to the side. This distortion comes from the fact that the human eye is not spherical in shape; instead, the iris sticks out slightly from the eyeball to form a bump in the area of the iris. As a result, the shape of the eye and the associated shape of the reflected structured light pattern are different depending on which direction the eye is pointed when images of the eye are captured from a fixed position. Changes in the structured light pattern can subsequently be analyzed in captured eye images to determine the direction that the eye is looking.

[000320] The eye imaging system can also be used for the assessment of aspects of the health of the user. In this case, information gained from analyzing captured images of the iris 5012 is different from information gained from analyzing captured images of the retina 5014. Images of the retina 5014 are captured using light 5357 that illuminates the inner portions of the eye, including the retina 5014. The light 5357 can be visible light, but in an embodiment, the light 5357 is infrared light (e.g. wavelength 1 to 5 microns) and the camera 3280 is an infrared light sensor (e.g. an InGaAs sensor) or a low resolution infrared image sensor that is used to determine the relative amount of light 5357 that is absorbed, reflected or scattered by the inner portions of the eye. The majority of the light that is absorbed, reflected or scattered can be attributed to materials in the inner portion of the eye, including the retina, where there are densely packed blood vessels with thin walls, so that the absorption, reflection and scattering are caused by the material makeup of the blood. These measurements can be conducted automatically when the user is wearing the HWC, either at regular intervals, after identified events, or when prompted by an external communication. In a preferred embodiment, the illuminating light is near infrared or mid infrared (e.g. 0.7 to 5 microns wavelength) to reduce the chance of thermal damage to the wearer's eye. In another embodiment, the polarizer 3285 is antireflection coated to reduce reflections of the light 5357, the light 2969 or the light 3275 from this surface, and thereby increase the sensitivity of the camera 3280. In a further embodiment, the light source 5355 and the camera 3280 together comprise a spectrometer, wherein the relative intensity of the light reflected by the eye is analyzed over a series of narrow wavelengths within the range of wavelengths provided by the light source 5355 to determine a characteristic spectrum of the light that is absorbed, reflected or scattered by the eye. For example, the light source 5355 can provide a broad range of infrared light to illuminate the eye, and the camera 3280 can include a grating to laterally disperse the reflected light from the eye into a series of narrow wavelength bands that are captured by a linear photodetector, so that the relative intensity by wavelength can be measured and a characteristic absorbance spectrum for the eye can be determined over the broad range of infrared. In a further example, the light source 5355 can provide a series of narrow wavelengths of light (ultraviolet, visible or infrared) to sequentially illuminate the eye, and the camera 3280 includes a photodetector that is selected to measure the relative intensity of the series of narrow wavelengths in a series of sequential measurements that together can be used to determine a characteristic spectrum of the eye. The determined characteristic spectrum is then compared to known characteristic spectra for different materials to determine the material makeup of the eye. In yet another embodiment, the illuminating light 5357 is focused on the retina 5014, a characteristic spectrum of the retina 5014 is determined, and the spectrum is compared to known spectra for materials that may be present in the user's blood. For example, in the visible wavelengths, 540nm is useful for detecting hemoglobin and 660nm is useful for differentiating oxygenated hemoglobin. In a further example, in the infrared, a wide variety of materials can be identified, as is known by those skilled in the art, including: glucose, urea, alcohol and controlled substances. Figure 55 shows a series of example spectra for a variety of controlled substances as measured using a form of infrared spectroscopy (ThermoScientific Application Note 51242, by C. Petty, B. Garland and the Mesa Police Department Forensic Laboratory, which is hereby incorporated by reference herein). Figure 56 shows an infrared absorbance spectrum for glucose (Hewlett Packard Company 1999, G. Hopkins, G. Mauze; "In-vivo NIR Diffuse-reflectance Tissue Spectroscopy of Human Subjects," which is hereby incorporated by reference herein).

United States Patent 6675030, which is hereby incorporated by reference herein, provides a near infrared blood glucose monitoring system that includes infrared scans of a body part such as a foot. United States Patent publication 2006/0183986, which is hereby incorporated by reference herein, provides a blood glucose monitoring system including a light measurement of the retina. Embodiments of the present disclosure provide methods for automatic measurement of specific materials in the user's blood by illuminating the iris of the wearer's eye at one or more narrow wavelengths, measuring the relative intensity of the light reflected by the eye to identify the relative absorbance spectrum, and comparing the measured absorbance spectrum with known absorbance spectra for the specific material, such as illuminating at 540 and 660nm to determine the level of hemoglobin present in the user's blood.

[000321] Another aspect of the present disclosure involves using a head-worn computer as a therapeutic device. The head-worn computer can provide light, content, sound, haptic feedback, etc., all in a coordinated and regulated fashion, to affect a user. Each of the several stimuli that can be generated by a head-worn computer has been shown to be capable of influencing the user. For instance, certain studies suggest that the brain communicates within itself at a frequency of 40Hz and that changes in the brain can be effected by reinforcing the 40Hz rhythm with external stimulus. This has been shown to affect people with Alzheimer's. As another example, Seasonal Affective Disorder ("SAD") is a disorder that affects people when they do not have enough light during the shorter winter months. It has been shown that high brightness lighting at the right time of day (e.g. early morning) can improve a person's SAD symptoms. By way of a further example, the head-worn computer may operate in a time-zone adjustment mode where the presentation of light, content, sound, etc. can be tailored and coordinated to help a traveler adjust to a new time-zone. The traveler may be traveling by plane, for example, crossing several time-zones, and the head-worn computer may cause a stimulus to be presented such that the person transitions more easily to the new time-zone.

[000322] In addition to delivering therapy, the head-worn computer may keep track of when therapy sessions take place, are due, have been missed, etc. The head-worn computer may also be used to provide the user with feedback on therapy sessions (e.g. how long each one was, which stimuli were applied and how they were applied), etc. In embodiments, the head-worn computer may be used to conduct an evaluation or test of the person's performance based on the provided therapy. The evaluation may be completed by providing content in a display of the head-worn computer (e.g. a see-through display, a non-see-through display). The evaluation may involve the user answering questions or providing other direct responses. The evaluation may involve eye imaging, motion detection, or other automatic systems for interpreting a test. For example, with eye tracking, the user's eye can be evaluated to understand pupil response, blink rate, eye movement patterns, etc., and an inertial measurement unit ("IMU") may be used to evaluate head, body, body part, etc. movement. Each of the measured responses may be compared to prior responses, a standard, etc., in an effort to understand the effectiveness of the therapy and/or to adjust future therapy sessions. This type of therapy/evaluation/feedback can be used to treat any disorder that is affected by one of the provided stimuli.

[000323] A head-worn computer with see-through ability may further be used during normal activities in life such that the stimuli remain in the 'background': the person is not generally aware of the stimuli and can continue daily activities while getting therapy. In embodiments, the head-worn computer may provide therapy in a subtle fashion such that the therapy can be provided while the user is performing ordinary life tasks. For example, the head-worn computer may have a see-through display such that the user can see the surrounding environment through the display while light, content, sound, haptic feedback, etc. are provided to the user. In embodiments, a 40Hz dominant frequency may be used to provide one or more stimuli (e.g. light, sound, vibration, etc.) through the head-worn computer while the user is doing otherwise normal life activities (e.g. walking, driving, watching TV, or otherwise performing a usual life task).

[000324] A head-worn computer according to the principles disclosed herein may be used to treat a disease affecting the brain (e.g. Alzheimer's, dementia, memory loss, depression, anxiety, PTSD, etc.). In certain situations, the brain can be affected by being exposed to stimuli at a specific frequency (e.g. 40 Hz). The head-worn computer may operate by causing its internal lighting system to produce light (e.g. high brightness light or content, low brightness light or content) that essentially pulses at 40Hz. The pulsing light at 40Hz, or another effective frequency or pattern, can be applied for a prescribed period of time to help the brain find its internal communication rhythm once again. In embodiments, more than one stimulus may be applied at the same time. For example, light, sound and vibration may all be pulsed at 40Hz to cause a change in the user's behavior. The multiple stimuli may be coordinated such that they all occur in sync or out of sync, depending on the therapy, person and situation.
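
A minimal sketch of how such coordinated pulsing might be scheduled is shown below. It assumes hypothetical `set_display_brightness`, `set_audio_gain` and `set_haptic_level` driver hooks; the 40 Hz square-wave envelope and the per-stimulus phase offsets are illustrative, not a prescribed implementation.

```python
import math
import time

PULSE_HZ = 40.0  # dominant therapy frequency discussed above

def envelope(t, phase=0.0):
    """Square-wave on/off envelope at PULSE_HZ; phase is in cycles."""
    return 1.0 if math.sin(2 * math.pi * (PULSE_HZ * t + phase)) >= 0 else 0.0

def set_display_brightness(level): pass  # hypothetical driver hooks
def set_audio_gain(level): pass
def set_haptic_level(level): pass

def run_session(duration_s, in_sync=True):
    """Pulse light, sound and vibration together (in sync) or with
    staggered phases (out of sync), as the therapy requires."""
    phases = (0.0, 0.0, 0.0) if in_sync else (0.0, 1 / 3, 2 / 3)
    start = time.monotonic()
    while (t := time.monotonic() - start) < duration_s:
        set_display_brightness(envelope(t, phases[0]))
        set_audio_gain(envelope(t, phases[1]))
        set_haptic_level(envelope(t, phases[2]))
        time.sleep(1.0 / (PULSE_HZ * 8))  # oversample each pulse cycle
```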

[000325] In embodiments, the lenses of the head-worn computer may be changed to cause either a brighter scene or a darker scene for the user. A darker scene may be more desirable when the therapy is being applied while the person is stationary, for example. A lighter scene (e.g. more transparency in the see-through display) may be more desirable for a situation where the person is wearing the head-worn computer while moving around. To make the stimuli more immersive, a shroud (e.g. as described herein) may be mounted on the head-worn computer to shut out more of the surrounding environmental light. In a further embodiment, the user may be provided with a user interface to change the intensity of each stimulus. In embodiments, the intensity may be changed, regulated or modulated automatically depending on the therapy or responses to the therapy.

[000326] In embodiments, the therapeutic stimuli may be provided while content (e.g. entertainment content) is also provided, to make the therapy session more enjoyable and possibly more effective. For example, a movie with audio and haptics may be provided to the user simultaneously with a 40Hz dominant-frequency stimulus. The movie content, or other entertainment content (e.g. music, images, a game, etc.), may 'pulse' at 40Hz at an intensity that is sufficient to affect the brain. In embodiments, the content is provided with a highlighted element to keep the user's attention. For example, a relatively large ball may be provided as the content and the ball may pulse at 40Hz. In embodiments, the color of the content may be adjusted depending on the therapy or the user. For example, some patients become less sensitive to particular colors as the eyes change with age or are affected by disease or injury. If a person has become desensitized to green, for example, the content or light may be caused to be more blue or red. In embodiments, a color may be selected or changed based on testing that suggests the user is more sensitive to certain colors or that certain colors provide more effective therapy. In embodiments, the colors may change during the therapy. For example, the light may modulate between red, green and blue, or other colors, at the 40Hz rate or some multiple of the 40Hz rate. In embodiments, the color changes more slowly during the therapy (e.g. over a period of seconds, minutes, or hours).

[000327] As another example, the head-worn computer may be used to treat a user with SAD. The user may be prompted to use the head-worn computer at a certain time of day (e.g. in the morning). In embodiments, the stimulus (e.g. color(s) of the light, timing of therapy reminders, sounds, haptics) may be selected, regulated and modulated based on the person, geographic location, time of year, time of day, weather, etc. For example, the user may be prompted to use the head-worn computer first thing in the morning, and the light produced may be of relatively high intensity with a reddish or orange hue to simulate the morning sky. In other embodiments, the light first thing in the morning may be bluish to simulate a mid-day sky, if that is more effective. The color may change over the course of the therapy to simulate changes in daylight color over the day. In embodiments, the head-worn computer automatically changes the timing and the stimulus based on the geographic location of the user. If the user is up north, he may be more susceptible to SAD, so the therapy routine may be different for him (e.g. more intense, more time, more sessions in the day) as compared to when he is in a southern location. Time of year and weather predictions or current weather may also affect how and when the stimulus is applied. If it is winter time in the northern hemisphere and the user is in the north with thick cloudy weather, the head-worn computer may be adapted to provide altered stimulus (e.g. more intense, more time, more sessions in the day) as compared to a sunny day in the south.
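
The paragraph above amounts to a parameter-selection policy. The sketch below shows one hedged way such a policy could be expressed; the thresholds, session lengths and the `latitude_deg`, `season` and `cloud_cover` inputs are all illustrative assumptions rather than values taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class TherapyPlan:
    sessions_per_day: int
    minutes_per_session: int
    intensity: float       # 0.0 (off) .. 1.0 (maximum brightness)
    hue: str               # simulated sky color

def plan_sad_therapy(latitude_deg, season, cloud_cover):
    """Scale SAD light-therapy parameters up for northern latitudes,
    winter and heavy cloud, per the qualitative rules above.
    All numeric choices are illustrative assumptions."""
    plan = TherapyPlan(sessions_per_day=1, minutes_per_session=20,
                       intensity=0.6, hue="orange")   # morning-sky default
    if latitude_deg >= 45:            # farther north: more susceptible
        plan.sessions_per_day += 1
        plan.minutes_per_session += 10
    if season == "winter":
        plan.intensity = min(1.0, plan.intensity + 0.2)
    if cloud_cover > 0.7:             # thick cloud: compensate further
        plan.intensity = min(1.0, plan.intensity + 0.2)
    return plan

print(plan_sad_therapy(60.0, "winter", cloud_cover=0.9))
```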

[000328] As yet another example, the head-worn computer may be used to ease the negative effects of traveling across time zones. In embodiments, the user may be prompted to wear the head-worn computer at one or more times during travel such that stimulus can be applied to affect the user's mood, alertness, anxiety, mental state, etc. For example, the user may be prompted to wear the head-worn computer an hour before landing in a different time zone, mid-way through the flight, or at other times. The routine and stimulus applied may be affected by the number of time zones, time of year, time of day, local time at departure, time of arrival, country changes, duration of travel, etc. The head-worn computer may automatically configure itself as a travel companion based on knowledge of the user's flight information and the conditions at the place of departure and the place of arrival. For example, if the person is flying at night and arriving in the morning, the head-worn computer may only alert the user for a therapy session before landing, to cause the person to become alert and awake. If the person is flying through the day, the head-worn computer may cause 'daylight' to be simulated at several points during the flight. In embodiments, the head-worn computer may continue to provide the stimulus while allowing the person to see through to the environment.

[000329] While many of the embodiments herein involve a computer display, it should be understood that a therapy session according to the principles of the present disclosure may be provided with a lighting system that does not necessarily provide content. The lighting system may be one or more LEDs, an OLED, an LCD, etc., providing the desired light without the need for content.

[000330] An aspect of the present invention relates to providing an intuitive user interface for a head-worn computer, wherein the user interface includes a rotary style physical interface (e.g. a dial, track, etc.) in combination with a direction selection device (e.g. a button, active touch surface, capacitive touch pad, etc.). The inventors have discovered that the combination of a rotary style interface with a separate actuator provides an intuitive physical interface for navigating a graphical user interface in a head-worn computer display. The inventors discovered that it is difficult to navigate within the head-worn computer's graphical user interface when the controls are mounted on the head-worn computer, because the user cannot see the interface in this situation. The inventors also discovered that using a rotary dial encoder style user interface with periodic stops (i.e. mechanical features in the rotary device that cause it to 'click' or otherwise pause at the next spot as the rotary dial moves) allows the user interface to be configured such that a graphical selection element (e.g. cursor) in the graphical user interface 'snaps' from one selectable item to the next in correspondence with the mechanical stops of the rotary device. This makes moving from item to item feel mechanically connected to the action in the graphical user interface. In addition, the direction selection button can be used to regulate in which direction the selection element moves in the graphical user interface (GUI). For example, if the GUI includes a two-dimensional matrix of selectable items (e.g. icons), then the direction selection element may be activated once to cause the rotary device to move a cursor right and left, while an additional activation may then cause the cursor to move up and down. Without the separate direction control interface, the user might have to scroll through the items in one axis only (e.g. row by row).

[000331] Figure 57 illustrates a head-worn computer 102 with a rotary style physical user interface 57002 mounted on an arm of the head-worn computer 102, along with a direction selection control device 57004, which is also mounted on the arm of the head-worn computer 102. The placement of the various elements mounted on the head-worn computer as illustrated in Figure 57 is provided for illustrative purposes only. The inventors envision that the physical user interfaces (e.g. dial 57002 and direction selector 57004) may be otherwise mounted on the head-worn computer 102. For example, either interface may be mounted on a top, bottom or side surface of the head-worn computer. In embodiments, while the two physical user interfaces may operate in coordination or cooperation within the GUI, the two may be placed in separate places on the head-worn computer 102. For example, they may be mounted on separate arms of the head-worn computer 102. As another example, the direction selector may be mounted on the top of the arm and the rotary dial may be mounted on the bottom. The direction selector may be mounted such that it is 'out of the way' of the user's interactions with the dial, but in a proximity that makes it convenient to interact with. For example, the direction selector may be a button mounted on a top surface of the arm in a region generally above a bottom-mounted dial interface, but the button may be offset (e.g. shifted forward or backwards from a centerline of the dial) such that the user can interact with the dial by grabbing the top of the arm with an index finger and the dial with the thumb without touching the direction selection button. However, the direction selection button may be close enough to a centerline of the dial that a small shift of the user's index finger allows an interaction with the direction selection button. Figure 57 illustrates an embodiment where the dial 57002 is mounted on a bottom surface of the arm of the head-worn computer 102 and the direction selection button 57004 is mounted on a side of the same arm. In this configuration, the user may interact with the dial by grabbing the top of the arm with his index finger and the dial with his thumb. When the user wants to change the scroll direction of a cursor or other element in the GUI, he may slip his finger or thumb to the direction selection button 57004 for the interaction.

[000332] The rotary style physical interface 57002 may have mechanically derived stops or pause points (as discussed above). It also may have a mechanically derived selection activation system, such as an ability to accept a selection instruction in conjunction with the motion control of a graphical selection element. For example, the rotary device may be mechanically adapted such that the user can rotate a dial but also press through a centerline of the dial towards the center of the dial to effect a 'click' or selection. The rotary style physical interface need not be round, as illustrated in Figure 57. It may be oval, rectangular, square, etc., so long as the mechanical action causes the user to feel that he is rotating through selections.

[000333] The direction selection device 57004 may be a mechanical device (e.g. button, switch, etc.), capacitive sensor, proximity detector, optical sensor or other interface adapted to accept a user's physical input. It may also be programmed such that different patterns of interaction cause different commands to be generated. For example, a single touch or activation may cause the direction of scroll to be changed, and a double touch or activation may cause a GUI element selection to be made.

[000334] Figure 57 also illustrates two GUI environments 57018a and 57018b that may be presented in a display of the head-worn computer 102. GUI environment 57018a illustrates a set of selectable elements 57008 (e.g. application launch icons). As a user rotates the rotary style physical input device 57002 and the rotary device 'clicks' from stop to stop, the selection highlight hops horizontally from icon to icon in the GUI. At the end of a row, it may snap to the next row to give the dial a feeling of continuity. The direction selection device 57004 controls which direction 57012 is followed when the user turns the rotary device. If the rotary device is hopping icons in a row format (i.e. horizontally), an activation of the direction selection device may cause the same rotary action to move the icon hopping in a column format (i.e. vertically).
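
To make the snap-to-item behavior concrete, here is a minimal sketch of a cursor model driven by detent clicks and a direction toggle. The class and method names are hypothetical; the wrap-to-next-row rule implements the continuity feel described above under the stated assumptions.

```python
class GridCursor:
    """Cursor over a rows x cols icon grid, driven by rotary detents.

    Each detent 'click' moves one item along the active axis; a press of
    the direction selection device toggles the axis (hypothetical model).
    """

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.row, self.col = 0, 0
        self.axis = "horizontal"

    def toggle_direction(self):
        """Direction selection device activated: swap scroll axis."""
        self.axis = "vertical" if self.axis == "horizontal" else "horizontal"

    def detent(self, steps):
        """Rotary dial clicked `steps` stops (negative = reverse)."""
        if self.axis == "horizontal":
            # Flatten row-major so the cursor wraps to the next row,
            # giving the dial a continuous feel at row ends.
            i = (self.row * self.cols + self.col + steps) % (self.rows * self.cols)
            self.row, self.col = divmod(i, self.cols)
        else:
            self.row = (self.row + steps) % self.rows

cursor = GridCursor(rows=3, cols=4)
cursor.detent(5)             # wraps past the end of the first row
cursor.toggle_direction()
cursor.detent(1)             # now moves vertically
print(cursor.row, cursor.col)
```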

[000335] GUI environment 57018b illustrates an application environment. Following the launch of an application 57010, the application environment may appear in the display of the head-worn computer, and the rotary style input device and the direction selection device may affect the direction 57014 and degree of movement within the application environment. For example, following the launch of an application, the rotary style input device may be programmed to cause up and down scrolling within the application environment. In the event that the user would like to then scroll horizontally, he may activate the direction selection device to cause the rotary style input device to then cause a horizontal movement.

[000336] The inventors have also discovered that including haptic feedback in conjunction with interactions with the physical user interface devices can provide further guidance to the user. For example, when turning the rotary device or activating the direction selection device, a haptic system (e.g. as described herein) may be used to provide haptic feedback. The haptic system may have fine control over multiple haptic sensations (e.g. slight vibration, strong vibration, escalating vibration, de-escalating vibration, etc.), and the rotary movement or direction selection may trigger a particular pattern to produce a particular sensation. Progressive movements of a dial interface may cause a particular pattern. A fast shift of the dial may cause a different type of haptic feedback than a slower interaction, etc.
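
One hedged way to realize speed-dependent haptic feedback is to pick a vibration pattern from the detent rate, as sketched below. The pattern names, the rate threshold and the `play_pattern` hook are illustrative assumptions.

```python
import time

def play_pattern(name, intensity):
    """Hypothetical hook into the haptic driver."""
    print(f"haptic: {name} @ {intensity:.2f}")

class DialHaptics:
    """Choose a haptic pattern based on how fast the dial is turned."""

    FAST_DETENTS_PER_S = 6.0   # illustrative threshold

    def __init__(self):
        self._last_detent = None

    def on_detent(self):
        now = time.monotonic()
        rate = (1.0 / (now - self._last_detent)
                if self._last_detent is not None else 0.0)
        self._last_detent = now
        if rate >= self.FAST_DETENTS_PER_S:
            # Fast spin: short, sharp tick at higher intensity.
            play_pattern("sharp_tick", intensity=0.9)
        else:
            # Slow, deliberate click: gentler, longer pulse.
            play_pattern("soft_click", intensity=0.4)

    def on_direction_toggle(self):
        # Distinct escalating buzz confirms the axis change.
        play_pattern("escalating_buzz", intensity=0.6)
```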

[000337] Another aspect of the present invention relates to identifying the relative proximity of a user's fingers with respect to the various user interface controls that are mounted on a head-worn computer, such that a visual depiction of the proximity can be provided to the user in a display of the head-worn computer. The inventors have discovered that it can be difficult to identify where certain user control features are located when the controls are mounted on the head-worn computer, because the only feedback the user receives, generally speaking, is that of his basic touch and his memory of the layout of the interfaces. The inventors have further discovered that providing proximity detection near and around the various head-worn computer mounted user interfaces, along with a visual depiction in the head-worn computer display of the detected proximity of the fingers to the various user interface elements, provides guidance and a more intuitive user interface experience for the user.

[000338] Figure 58 illustrates a head-worn computer 102 with a physical user interface 57002 and a proximity detection system 58002. The proximity detection system 58002 may be arranged to sense a user's interaction with the head-worn computer 102. The information from the proximity detection system 58002 may be used to generate a representation 58004 of the head-worn computer 102, or a portion thereof, with an indication of where the user's physical or proximate interaction is occurring, for presentation in the display of the head-worn computer 102. The representation 58004 of the proximate interaction of the user with the head-worn computer 102 may be presented in a number of ways: horizontally, vertically, 2D, 3D, perspective 3D view, etc. The proximity detection system 58002 may have one or more detectors. Two or more detectors may be used to improve the sensitivity or coverage of the system. The detectors may be mounted on any of the head-worn computer surfaces. In embodiments, the proximity detectors are mounted on the surfaces that include user interface elements. As indicated herein, user interface elements may be mounted on any surface of the head-worn computer. In embodiments, the proximity detector is configured as a ring or other form that mounts around, partially around or in proximity to a user interface element. For example, a user control button may include a capacitive ring that detects interactions near the button.

[000339] Another aspect of the present invention relates to predicting the proximity of a user's interaction with a user interface mounted on a head-worn computer and causing haptic feedback that helps guide the user to the user interface. The haptic system (e.g. as described herein) may produce variable output such that its intensity can be used to guide a user towards a user interface element. This may be done in coordination with a visual representation of the interaction proximity (e.g. as discussed above).
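
A minimal sketch of such guidance: vibration intensity ramps up as the finger's estimated distance to the control shrinks. The linear ramp and the `GUIDE_RANGE_MM` constant are illustrative assumptions, not values from the disclosure.

```python
GUIDE_RANGE_MM = 30.0  # distance at which guidance begins (assumed)

def guidance_intensity(distance_mm):
    """Map estimated finger-to-control distance to haptic intensity.

    0.0 outside the guidance range, rising linearly to 1.0 at contact,
    so the vibration strengthens as the finger closes on the control.
    """
    if distance_mm >= GUIDE_RANGE_MM:
        return 0.0
    return 1.0 - (distance_mm / GUIDE_RANGE_MM)

for d in (40.0, 30.0, 15.0, 3.0, 0.0):
    print(f"{d:5.1f} mm -> intensity {guidance_intensity(d):.2f}")
```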

[000340] Another aspect of the present invention relates to an intuitive user interface for a head-worn computer that provides a physical interface and a visual indication for the control of aspects of the head-worn computer, such as the volume of the audio and the brightness of the image in a see-through display. For example, referring back to Figure 57, a physical user interface 57002, such as a dial or capacitive touch surface, may be used to control a level of volume for the audio produced by the head-worn computer and/or control the brightness of the displayed content in a see-through computer display of the head-worn computer. Further, in embodiments, the direction selection device 57004 may be used to select between two or more controllable aspects of the head-worn display. An indication of which aspect is being controlled (e.g. the volume or brightness) and/or the level of that aspect may be presented as content in the see-through display. For example, if a user presses the direction selection device 57004, an indication of which controllable aspect is presently active may be presented in the display so the user knows which aspect can be controlled with the physical user interface 57002. In addition, in embodiments, a level or setting of the controllable aspect may be presented in the display. The user may then be able to see an indication of the volume or brightness setting as they control it.
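
The following sketch shows one possible model of this aspect-toggle behavior, assuming hypothetical aspect names and a `show_overlay` display hook; it is illustrative only.

```python
from itertools import cycle

def show_overlay(text):
    """Hypothetical hook that draws an indicator in the see-through display."""
    print(f"[display] {text}")

class AspectController:
    """Dial adjusts whichever aspect the direction selector made active."""

    def __init__(self):
        self.levels = {"volume": 0.5, "brightness": 0.5}
        self._order = cycle(self.levels)     # toggle order is assumed
        self.active = next(self._order)

    def on_selector_press(self):
        """Direction selection device pressed: switch controlled aspect."""
        self.active = next(self._order)
        show_overlay(f"controlling: {self.active}")

    def on_dial(self, steps, step_size=0.05):
        """Each detent nudges the active aspect and shows its level."""
        level = self.levels[self.active] + steps * step_size
        self.levels[self.active] = max(0.0, min(1.0, level))
        show_overlay(f"{self.active}: {self.levels[self.active]:.0%}")

ctl = AspectController()
ctl.on_dial(+2)              # adjust volume (the initial active aspect)
ctl.on_selector_press()      # switch to brightness
ctl.on_dial(-1)
```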

[000341] Another aspect of the present invention relates to a 'frame tap' interface for a head-worn computer. The head-worn computer may have no user controls mounted on it, or it may have one or more user controls mounted on it. In embodiments, the head-worn computer is equipped with an inertial measurement unit positioned and adapted to detect when the user 'taps' the head-worn computer as an indication that the user wants to control an aspect of the software operating on the head-worn computer. The inertial measurement unit may be associated with a processor and memory such that tap or touch signatures can be recognized. The tap or touch signatures may be updated based on the particular user's actions. This may be done through computer learning. A palette of actions associated with types of taps (e.g. single tap, double tap, hard tap, light tap, front frame tap, temple tap, etc.) may be provided such that the user can make the associations he or she desires. In embodiments, the frame tap control may be one form of control used in connection with another form of control. For example, if a rotary selector is provided on the head-worn computer (e.g. as described herein elsewhere), the rotary selector may be used to move through a set of icons or within an application, and then the frame tap may be used to select an item or launch an application. The frame tap may be used in connection with a rotary interface, touch interface, button interface, switch interface, capacitive interface, strain gauge user interface, etc. Figure 59 illustrates a head-worn computer that performs an action based on a frame tap 5902 or touch.
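
A hedged sketch of IMU-based tap recognition follows: it thresholds the accelerometer magnitude to find impacts and classifies single versus double taps by inter-tap timing. The thresholds, the refractory window and the stream format are assumptions, not the disclosed signature model.

```python
import numpy as np

SAMPLE_HZ = 400          # assumed IMU rate
TAP_G = 2.5              # impact threshold in g (illustrative)
DOUBLE_TAP_MAX_S = 0.4   # max gap between taps of a double tap
REFRACTORY_S = 0.08      # ignore ringing right after an impact

def detect_taps(accel_mag):
    """Return sample indices of tap impacts in an accel-magnitude trace."""
    taps, skip_until = [], -1
    for i, g in enumerate(accel_mag):
        if i >= skip_until and g > TAP_G:
            taps.append(i)
            skip_until = i + int(REFRACTORY_S * SAMPLE_HZ)
    return taps

def classify(taps):
    """Label tap events as 'single' or 'double' by inter-tap gap."""
    events, i = [], 0
    while i < len(taps):
        if (i + 1 < len(taps)
                and (taps[i + 1] - taps[i]) / SAMPLE_HZ <= DOUBLE_TAP_MAX_S):
            events.append("double")
            i += 2
        else:
            events.append("single")
            i += 1
    return events

# Synthetic trace: ~1 g baseline with two impacts 0.2 s apart.
trace = np.ones(SAMPLE_HZ)
trace[100] = trace[180] = 4.0        # simulated tap spikes
print(classify(detect_taps(trace)))  # -> ['double']
```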

[000342] Another aspect of the present invention relates to a head-worn computer with a strain gauge user interface. The strain gauge may be a device adapted to measure strain on a platform. The strain gauge may be mounted on a user interface, an external user interface, the head-worn computer, etc. The strain gauge may be connected to a processor and memory such that the processor can interpret user interactions with the strain gauge. The strain gauge user interface may be adapted as a single action device (i.e. it performs one user interface function), a multiple action device, a scroll device (e.g. initiating scrolling graphical user interface actions with a swipe on the strain gauge surface), etc. In embodiments, a strain gauge user interface arrangement may be used in connection with another form of user interface (e.g. as described herein elsewhere).

[000343] Figure 60 illustrates embodiments of devices with strain gauge user interfaces 6002. The examples show the strain gauge user interfaces mounted on a head-worn computer and on an external user controller. In addition, in embodiments, the strain gauge user interface 6002 may have physical feature(s) 6004 that are easy for the user to feel and find and that are indicative of a control touch area. In embodiments, the strain gauge user interface may be associated with a proximity detection system that identifies where along the frame the user is touching (e.g. as described herein elsewhere).

[000344] Another aspect of the present invention relates to detecting that a head-worn computer has been, or is about to be, mounted on the user's head. The detection system may be a proximity detection system, capacitive detection system, mechanical detection system, etc. For example, proximity sensors may be mounted in the frame of the head-worn computer at a place that touches or comes close to the user's head once the computer is mounted (e.g. in the front frame that touches or comes close to the forehead, in a temple section, in an ear horn section, etc.). Once the proximity sensor, for example, detects the forehead, the head-worn computer may turn on. It may be advantageous to have several proximity detectors to properly indicate that the head-worn computer has been mounted on the user's head. For example, the user may pick up the computer by an arm of the computer, but since it is not yet mounted on the head, one may not want to activate the computer. In embodiments, a second proximity detector (e.g. in the other arm, in the forehead region) may be desirable such that the head-worn computer only infers that it has been mounted once more than one proximity detector has been activated. In embodiments, the proximity detectors may be used to turn the computer on before it is mounted on the head. For example, it may start once a detector in the arm has been activated, indicating that the device is about to be mounted on the head.
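
A minimal sketch of the multi-sensor rule described above, requiring at least two sensors to agree before inferring that the computer is worn; the sensor names and the two-sensor quorum are assumptions for illustration.

```python
def infer_worn(sensor_states, quorum=2):
    """Infer the head-worn computer is on the head only when at least
    `quorum` proximity detectors are active, so picking the device up
    by one arm does not power it on."""
    active = sum(1 for is_near in sensor_states.values() if is_near)
    return active >= quorum

# Hypothetical sensor placement per the example above.
print(infer_worn({"forehead": False, "left_arm": True,
                  "right_arm": False}))               # picked up -> False
print(infer_worn({"forehead": True, "left_arm": True,
                  "right_arm": False}))               # donned -> True
```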

[000345] Another aspect of the present invention relates to a modular expansion system for a head-worn computer. In embodiments, the head-worn computer has a connection system (e.g. including a mechanical and an electrical connection) such that a functional module can be mounted on and removed from the head-worn computer. This arrangement makes possible a functional expansion or modification of the functions of the head-worn computer through a removable and replaceable modular system. For example, the head-worn computer may not natively have a depth sensing system, and a replaceable module adapted to provide depth sensing may be mounted on the head-worn computer to provide that function. In embodiments, a processor in the head-worn computer reads data from the module to identify it and/or its functions such that the module can be properly operated and data from the module can be properly used. In embodiments, the identification is accomplished through a predetermined code.

[000346] In embodiments, the head-worn computer includes an electrical connector adapted to electrically connect with a modular expansion module, wherein the modular expansion module adds a capability to the head-worn computer and is removably mounted to the head-worn computer; and a mount adapted to physically secure the modular expansion module to the head-worn computer. In embodiments, the mount may be a magnetic element to physically secure the modular expansion module. In embodiments, the mount may include a snap fit element(s) to physically secure the modular expansion module. In embodiments, the mount may include an alignment element(s) to physically align the modular expansion system on the head-worn computer. In embodiments, the head-worn computer may further include a processor adapted to recognize a function of the modular expansion module, when connected to the modular expansion module, such that the modular expansion module operates in accordance with a schema identified for the function. In embodiments, the expansion module may perform a function related to optical modification for a camera in the head-worn computer (e.g. a zoom lens), or it may contain a self-contained camera, a sensor, a sensor set, stereo/multi-cameras, a depth sensor, radar, tracking IR emitting diodes, tracking IR photodiodes, IMU(s), sound or haptic transducers, light emitters, magnetic field sensors, magnetic field emitters, EMI tracking (e.g. coils), an electrochromic window, a liquid crystal window, a shutter, a wind sensor, a scent sensor, etc. In embodiments, the modular expansion module may be powered by a battery or other power source of the head-worn computer or may have internal power. In embodiments, when the expansion module has internal power, the module may also operate when it is removed from the head-worn computer. For example, the expansion module may include a camera and a low power data connection (e.g. Bluetooth) such that the expansion module can be placed on a table in front of the user so the user can video himself (e.g. for a video conference).
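
As a non-authoritative illustration of the identification step, the sketch below has the host read a predetermined identification code from a newly attached module and look up an operating schema for it. The code values, the registry and the `read_module_id` hook are hypothetical.

```python
def read_module_id(connector):
    """Hypothetical hook: read the predetermined identification code
    over the expansion module's electrical connector."""
    return connector["id_code"]

# Assumed registry mapping ID codes to functions and operating schemas.
MODULE_REGISTRY = {
    0x0001: {"function": "depth_sensor", "schema": "stream_depth_frames"},
    0x0002: {"function": "zoom_lens",   "schema": "optical_passthrough"},
    0x0003: {"function": "camera",      "schema": "stream_rgb_frames"},
}

def on_module_attached(connector):
    """Recognize the module and select how the HWC should operate it."""
    code = read_module_id(connector)
    entry = MODULE_REGISTRY.get(code)
    if entry is None:
        return "unknown module: operate in safe/disabled mode"
    return f"module '{entry['function']}' active, schema {entry['schema']}"

print(on_module_attached({"id_code": 0x0001}))
```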

[000347] Figure 61 illustrates a head-worn computer 102 adapted to receive an expansion module 6102. The expansion module 6102 electrically connects to the HWC 102 through an electrical connector 6104. The HWC 102 and expansion module 6102 may have mechanical, magnetic or other securing mechanisms to hold the two together (e.g. optional attachment point 6108).

[000348] Figure 62 illustrates an embodiment where the expansion module 6102 hangs down in front of the HWC 102. In embodiments, the expansion module may sit on top, hang over a side or front of the HWC 102, or be otherwise mounted. In the illustrated embodiment of Figure 62, while the expansion module 6102 hangs down in front of the HWC 102, it still sits above a see-through area of the HWC 102. Figure 63 illustrates a few optional configurations for the expansion module 6102.

[000349] An aspect of the present invention relates to illumination of a person's eye with a pattern of light. The pattern of light is emitted from an optical system in a head-worn computer. The pattern of light is projected from the optical system such that the eye reflects a predictable pattern of light, which is based on the projected pattern and the shape of the person's eye. The reflected pattern can be captured by a camera in the head-worn computer and analyzed to determine a position or direction of the eye. In embodiments, the optical system is arranged to minimize the number of light sources while still generating a pattern of light.

[000350] Figure 64 illustrates an optical system adapted to be mounted in a head-worn computer, positioned in front of the user's eye, and further adapted to reflect light that is internally transmitted through the optic such that the reflected light forms the pattern of light for eye illumination. Optic 6402 includes a number of reflective features 6404. The optical system includes a light source 6406 (e.g. an IR LED). The light source 6406 projects light into the optic 6402. The light passes through the optic, and when it reaches the reflective features 6404 the light reflects towards the eye. The pattern of light produced on the eye 6410 is then captured by a camera 6408 (e.g. an IR camera). In embodiments, some of the light from the light source 6406 may be allowed to shine directly onto the eye to create another point in the pattern of light, or to act as a flood light to fill in the exposure of the eye so that characteristics of the eye (e.g. iris, retina, pupil, as mentioned elsewhere herein) can also be imaged by the camera 6408 that is capturing the eye-reflected pattern of light 6410.
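
For illustration only, the sketch below shows one conventional way such a reflected pattern can be analyzed: the offset between the pupil center and the centroid of the detected pattern reflections is mapped to a gaze direction by a previously fitted calibration. The linear calibration model and the point format are assumptions; the disclosure does not specify this algorithm.

```python
import numpy as np

def gaze_from_pattern(glints_px, pupil_px, calib_matrix, calib_offset):
    """Estimate gaze angles from the eye-reflected light pattern.

    glints_px: (N, 2) image coordinates of detected pattern reflections.
    pupil_px:  (2,) image coordinates of the pupil center.
    The pupil-to-glint-centroid vector is mapped to (azimuth, elevation)
    in degrees by an assumed, pre-fitted affine calibration.
    """
    glint_centroid = np.asarray(glints_px, dtype=float).mean(axis=0)
    offset = np.asarray(pupil_px, dtype=float) - glint_centroid
    return calib_matrix @ offset + calib_offset

# Hypothetical calibration: ~0.2 deg of gaze per pixel of offset.
M = np.array([[0.2, 0.0],
              [0.0, 0.2]])
b = np.zeros(2)

glints = [(100, 80), (120, 80), (100, 100), (120, 100)]  # reflected pattern
pupil = (115, 88)
print(gaze_from_pattern(glints, pupil, M, b))  # -> [1.0, -0.4] degrees
```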

[000351] Figure 65 illustrates several different example optical configurations for the reflection of image light and for the generation and distribution of eye illumination light. The direct system 6506 includes an optic 6402 that performs multiple functions: in this configuration the optic 6402 reflects image light 6504 directly to the eye and emits eye light. The direct system with a separate eye illumination module 6508 includes a separate optic designed to emit the eye light 6502. In this configuration the optic 6402 may be a removable optic, a corrective optic, a protective optic, etc. The two lower optical configurations are referred to as indirect because the image light reflects away from the eye to an optical surface and is then reflected back to the eye. In the indirect system 6510 the optic 6402 performs the image light reflection and emits the eye light. In the indirect system 6512 the optic 6402 is a separate module that emits the eye light. In configurations 6508 and 6512 the optic 6402 is depicted, as an example, closer to the user's eye, but it may also be farther away or on the opposite side of the surface directing the image light (example not shown in the drawings).

[000352] In embodiments, the optic 6402 may be a waveguide adapted to transmit (e.g. through total internal reflection "TIR" transmission) image light to the wearer's eye as well as transmit eye illumination light. The waveguide may include features that cause the eye illumination light to reflect towards the wearer's eye in a predictable pattern.

[000353] Figure 66 represents an alternate view of 6510 and illustrates an optical system in a head-mounted frame that includes internally reflected eye illumination light, where the optic 6402 includes features 6404 (e.g. notches, reflective paint, reflective surface, etc.) to direct the internally reflected light out of the optic and towards the wearer's eye. This embodiment includes an optic 6402 that performs multiple functions: it reflects image light from the image source 6602 away from the wearer's eye towards an optic that is adapted to reflect the image light back through the optic 6402 and to the wearer's eye, and it is adapted to reflect internally reflected eye illumination light towards the wearer's eye. The optical system includes a camera 6408 adapted to capture the eye illumination that is reflected from the wearer's eye (e.g. an IR camera if the light source 6406 generates IR eye illumination light). The light source 6406 is positioned to inject light (e.g. IR) into an edge of the optic 6402 such that the light internally transmits through the optic 6402 (e.g. through internal reflections) until the light reaches the reflective features 6404, which reflect the light towards the wearer's eye. In order to produce the desired image of the eye-reflected pattern of light 6410 with the camera, the size, shape, angle and type of the features 6404 may be adapted to compensate for their location on the optic according to: the angle the optic is tilted relative to the eye; the distance between the feature and the light source 6406; the distance between the feature and the expected eye location; and the distance between the expected eye location and the camera. For example, in Figure 66 the reflective features 6404 may consist of stripes of white paint on the bottom edge of the optic and angled grooves on the top edge cut into the surface away from the eye, and both features may increase in size the farther away they are from the camera and light source to maintain both a consistent size and brightness in the reflected image captured by the camera.

[000354] Figure 67 illustrates a few optical configurations that may be used in the alternative or in combination. As shown, the eye illumination pattern may be generated by features 6404 on a front lens or shield, or by features 6404 on a combiner element, each of which may be considered an optic 6402. The front shield is physically mounted in front of image directive optics (e.g. combiner 6704). The front shield may be removable and replaceable for cleaning, exchange of lens types, etc. The eye illumination light may be transmitted to the eye around the combiner 6704 or through the combiner 6704. In embodiments, the optical system may also have a floodlight 6702. In the embodiment depicted in Figure 67, the floodlight 6702 is mounted in or near the image light production optics and its light is transmitted along an optical axis that is substantially in line with the image light transmission axis. In embodiments, the floodlight 6702 may be mounted in a different location such that the light generally illuminates the eye, or some of the light from the light source 6406 may be allowed to shine directly onto the eye to serve the parallel purpose of a flood light. An advantage of having the floodlight 6702 transmit into the image light optical axis is that it is more likely to provide uniform illumination of the characteristics of the eye, and the light may then be directed into the pupil for retina illumination.

[000355] Figure 68 illustrates a direct style illumination system 6506. In this embodiment, the optic 6402 operates as both the eye illumination optic and the combiner that reflects image light directly to the eye and provides the see-through view of the user's surrounding environment. The two illustrations of Figure 68 show different positions for the camera 6408. In the left illustration, the camera 6408 is mounted towards the top of the nose bridge 6802, or at another location towards the top portion of the structure as indicated (e.g. in the lower surface of the glasses housing where it will be physically close to the other electronics in the glasses). In this higher location the camera can point downward towards the optic 6402 in order to capture reflected camera light 6603 that is coming from the eye at a downward angle, which is less likely to be blocked by the user's upper eyelashes and more likely to be closer to the direction of the user's gaze when they are viewing image light 6504. The right illustration depicts the camera in a lower portion of the nose bridge 6802, or at an edge of the optic 6402, in order to capture light approximately along the line of the user's gaze, or slightly below it, to minimize obstructions from the user's upper eyelashes. The light from the light source 6406 may be injected into the optical element 6402, where it eventually exits off of a feature towards the eye. The light can then be reflected from the eye back onto the optic/combiner 6402, which may include a hot mirror on the front or back surface, and then up to the camera, which is integrated into the frame of the glasses. In embodiments, the location of the camera is selected so that the path of light from the eye into the camera is below the line of gaze from the eye, to minimize eyelash interference.

[000356] Figure 69 illustrates an eye illumination system with various light source 6406 positions and reflective feature 6404 positions and angles. In the left version of Figure 69, the light source 6406 injects light into a side portion of the optic 6402 and the reflective features 6404 reflect the light towards the eye. Each of the reflective features 6404 is positioned and adapted to reflect the internal optic light at an angle depending on its distance from the eye and the angle of the optic 6402 with respect to the eye. As an illustration, the upper feature in the left illustration shows a reflective feature 6404 at approximately 45 degrees to the user's eye (perpendicular to the plane of the optic 6402) to properly reflect the internal light out of the optic 6402 and towards the desired portion of the user's eye, whereas the lower reflective feature 6404 of the left illustration is relatively flat to the drawing (or roughly 45° to the plane of the optic 6402) to properly reflect the light. The illustration on the right of Figure 69 shows the light source 6406 injecting light at the top of the optic 6402 and a lower, relatively flat reflective feature 6404. In both illustrations it may be advantageous to let some of the light from the light source 6406 shine directly onto the eye, either to flood the eye exposure in the camera so it is visible or simply to provide another point of light, at each light source location, in the reflected pattern 6410.

[000357] Figure 70 illustrates an indirect optical system 6510, where the optic 6402 is used as both an eye illumination optic and a combiner. The combiner in this configuration reflects image light forward, away from the eye, towards a focusing element, which then reflects the image light back through the combiner towards the eye. In this configuration, the reflective features 6404 will be different from the configurations illustrated in Figure 69 due to the different angle of the optic 6402. As can be seen in Figure 70, the reflective features are angled to reflect the internally reflected light out to the proper position on the eye. The illustration on the right of Figure 70 shows the light source 6406 injecting light at the top of the optic 6402 and a lower, relatively flat reflective feature 6404. In both illustrations it may be advantageous to let some of the light from the light source 6406 shine directly onto the eye, either to flood the eye exposure in the camera so it is visible or simply to provide another point of light, at each light source location, in the reflected pattern 6410.

[000358] Figure 71 illustrates a modular eye-imaging system adapted to be assembled as part of a head-worn computer. In embodiments, the optic 6402 is mechanically and electrically adapted to be removably and replaceably mounted to the head-worn computer. This allows, for instance, a head-worn computer to be fitted with an accessory for eye imaging. In the embodiment illustrated in Figure 71, the nose bridge 6802 and the optic 6402 may be modules to be mounted together or separately to the head-worn computer. The camera 6408 may be mounted on the nose bridge or other removable assembly. The optic 6402 may be a corrective optic adapted to correct the wearer's vision.

[000359] Figure 72 illustrates a side view of the modular eye-imaging system adapted to be assembled as part of a head-worn computer. In some embodiments the camera may be located to image the eye directly, as on the right side of Figure 72, while in other embodiments the camera is located to image the eye off of the combiner element.

[000360] Another aspect of the present disclosure relates to an intuitive user interface mounted on the HWC 102, where the user interface includes tactile feedback (otherwise referred to as haptic feedback) to provide the user an indication of engagement and change. In embodiments, the user interface is a rotating element on a temple section of a glasses form factor of the HWC 102. The rotating element may include segments such that it positively engages at certain predetermined angles. This facilitates tactile feedback to the user. As the user turns the rotating element it 'clicks' through its predetermined steps or angles, and each step causes displayed user interface content to be changed. For example, the user may cycle through a set of menu items or selectable applications. In embodiments, the rotating element also includes a selection element, such as a pressure-induced section where the user can push to make a selection.

[000361] Figure 73 illustrates a human head wearing a head-worn computer in a glasses form factor. The glasses have a temple section 7302 and a rotating user interface element 7304. The user can rotate the rotating element 7304 to cycle through options presented as content in the see-through display of the glasses. Figure 74 illustrates several examples of different rotating user interface elements 7304a, 7304b and 7304c. Rotating element 7304a is mounted at the front end of the temple and has significant side and top exposure for user interaction. Rotating element 7304b is mounted further back and also has significant exposure (e.g. 270 degrees of touch). Rotating element 7304c has less exposure and is exposed for interaction on the top of the temple. Other embodiments may have a side or bottom exposure.

[000362] Another aspect of the present disclosure relates to a haptic system in a head-worn computer. Creating visual, audio, and haptic sensations in coordination can increase the enjoyment or effectiveness of awareness in a number of situations. For example, when viewing a movie or playing a game while digital content is presented in a computer display of a head-worn computer, it is more immersive to include coordinated sound and haptic effects. When presenting information in the head-worn computer, it may be advantageous to present a haptic effect to enhance, or itself convey, the information. For example, the haptic sensation may gently cause the user of the head-worn computer to believe that there is some presence on the user's right side, but out of sight. It may be a very light haptic effect that causes the 'tingling' sensation of a presence of unknown origin. It may be a high intensity haptic sensation coordinated with an apparent explosion, either out of sight or in sight in the computer display. Haptic sensations can be used to generate a perception in the user that objects and events are close by. As another example, digital content may be presented to the user in the computer displays and the digital content may appear to be within reach of the user. If the user reaches out his hand in an attempt to touch the digital object, which is not a real object, the haptic system may cause a sensation and the user may interpret the sensation as a touching sensation. The haptic system may generate slight vibrations near one or both temples, for example, and the user may infer from those vibrations that he has touched the digital object. This additional dimension in sensory feedback can be very useful and create a more intuitive and immersive user experience.

[000363] Another aspect of the present disclosure relates to controlling and modulating the intensity of a haptic system in a head-worn computer. In embodiments, the haptic system includes separate piezo strips such that each of the separate strips can be controlled separately. Each strip may be controlled over a range of vibration levels, and some of the separate strips may have a greater vibration capacity than others. For example, a set of strips may be mounted in the arm of the head-worn computer (e.g. near the user's temple, ear, rear of the head, substantially along the length of the arm, etc.), and the further forward the strip, the higher capacity the strip may have. The strips of varying capacity could be arranged in any number of ways, including linear, curved, compound shape, two dimensional array, one dimensional array, three dimensional array, etc. A processor in the head-worn computer may regulate the power applied to the strips individually, in sub-groups, as a whole, etc. In embodiments, separate strips or segments of varying capacity are individually controlled to generate a finely controlled multi-level vibration system. Patterns based on frequency, duration, intensity, segment type, and/or other control parameters can be used to generate signature haptic feedback. For example, to simulate the haptic feedback of an explosion close to the user, a high intensity, low frequency, and moderate duration pattern may be used. A bullet whipping by the user may be simulated with a higher frequency and shorter duration. Following this disclosure, one can imagine various patterns for various simulation scenarios.
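
The sketch below expresses the signature-pattern idea as data: each scenario maps to frequency, duration, intensity and segment-selection parameters that a scheduler could feed to individually controlled piezo strips. The numeric values and the `drive_segment` hook are illustrative assumptions consistent with the qualitative examples above (explosion: high intensity, low frequency, moderate duration; passing bullet: higher frequency, shorter duration).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class HapticPattern:
    frequency_hz: float
    duration_s: float
    intensity: float          # 0.0 .. 1.0
    segments: tuple           # indices of piezo strips to drive

# Illustrative signature patterns per the examples in the text.
PATTERNS = {
    "explosion": HapticPattern(frequency_hz=30.0,  duration_s=0.8,
                               intensity=1.0, segments=(0, 1, 2, 3)),
    "bullet":    HapticPattern(frequency_hz=180.0, duration_s=0.12,
                               intensity=0.7, segments=(2, 3)),
    "presence":  HapticPattern(frequency_hz=60.0,  duration_s=1.5,
                               intensity=0.15, segments=(3,)),
}

def drive_segment(index, frequency_hz, intensity, duration_s):
    """Hypothetical hook into the per-strip piezo driver."""
    print(f"strip {index}: {frequency_hz} Hz @ {intensity} for {duration_s}s")

def play(name):
    p = PATTERNS[name]
    for seg in p.segments:                 # each strip driven individually
        drive_segment(seg, p.frequency_hz, p.intensity, p.duration_s)

play("explosion")
```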

[000364] Another aspect of the present disclosure relates to making a physical connection between the haptic system and the user's head. Typically, with a glasses format, the glasses touch the user's head in several places (e.g. ears, nose, forehead, etc.) and these areas may be satisfactory for transmitting the necessary haptic feedback. In embodiments, an additional mechanical element may be added to better translate the vibration from the haptic system to a desired location on the user's head. For example, a vibration or signal conduit may be added to the head-worn computer such that there is a vibration translation medium between the head-worn computer's internal haptic system and the user's temple area.

[000365] Figure 75 illustrates a head-worn computer 102 with a haptic system comprised of piezo strips 7502. In this embodiment, the piezo strips 7502 are arranged linearly with strips of increasing vibration capacity from the back to the front of the arm 7504. The increasing capacity may be provided by different sized strips, for example. This arrangement can cause a progressively increasing vibration power 7503 from back to front. This arrangement is provided for ease of explanation; other arrangements are contemplated by the inventors of the present application and these examples should not be construed as limiting. The head-worn computer 102 may also have a vibration or signal conduit 7501 that transmits the physical vibrations from the haptic system to the head of the user 7505. The vibration conduit may be malleable to form to the head of the user for a tighter or more appropriate fit.

[000366] An aspect of the present invention relates to a head-worn computer, comprising: a frame adapted to hold a computer display in front of a user's eye; a processor adapted to present digital content in the computer display and to produce a haptic signal in coordination with the digital content display; and a haptic system comprised of a plurality of haptic segments, wherein each of the haptic segments is individually controlled in coordination with the haptic signal. In embodiments, the haptic segments comprise a piezo strip activated by the haptic signal to generate a vibration in the frame. The intensity of the haptic system may be increased by activating more than one of the plurality of haptic segments. The intensity may be further increased by activating more than two of the plurality of haptic segments. In embodiments, each of the plurality of haptic segments comprises a different vibration capacity. In embodiments, the intensity of the haptic system may be regulated depending on which of the plurality of haptic segments is activated. In embodiments, the plurality of haptic segments are mounted in a linear arrangement and the segments are arranged such that the higher capacity segments are at one end of the linear arrangement. In embodiments, the linear arrangement is from back to front on an arm of the head-worn computer. In embodiments, the linear arrangement is proximate a temple of the user. In embodiments, the linear arrangement is proximate an ear of the user. In embodiments, the linear arrangement is proximate a rear portion of the user's head. In embodiments, the linear arrangement is from front to back on an arm of the head-worn computer, or otherwise arranged.

[000367] An aspect of the present disclosure provides a head-worn computer with a vibration conduit, wherein the vibration conduit is mounted proximate the haptic system and adapted to touch the skin of the user's head to facilitate vibration sensations from the haptic system to the user's head. In embodiments, the vibration conduit is mounted on an arm of the head-worn computer. In embodiments, the vibration conduit touches the user's head proximate a temple of the user's head. In embodiments, the vibration conduit is made of a soft material that deforms to increase the contact area with the user's head.

[000368] An aspect of the present disclosure relates to a haptic array system in a head- worn computer. The haptic array(s) that can correlate vibratory sensations to indicate events, scenarios, etc. to the wearer. The vibrations may correlate or respond to auditory, visual, proximity to elements, etc. of a video game, movie, or relationships to elements in the real world as a means of augmenting the wearer's reality. As an example, physical proximity to objects in a wearer's environment, sudden changes in elevation in the path of the wearer (e.g. about to step off a curb), the explosions in a game or bullets passing by a wearer. Haptic effects from a piezo array(s) that make contact the side of the wearer' s head may be adapted to effect sensations that correlate to other events experienced by the wearer.

[000369] Figure 76 illustrates a haptic system according to the principles of the present disclosure. In embodiments, the piezo elements are mounted or deposited with varying widths, and thus varying force, on a rigid or flexible, non-conductive substrate attached to, or part of, the temples of glasses, goggles, bands or another form factor. The non-conductive substrate may conform to the curvature of a head by being curved, and it may be able to pivot (e.g. in and out, side to side, up and down, etc.) relative to a person's head. This arrangement may be mounted to the inside of the temples of a pair of glasses. Similarly, the vibration conduit, described herein elsewhere, may be mounted with a pivot. As can be seen in Figure 76, the piezo strips 7502 may be mounted on a substrate and the substrate may be mounted to the inside of a glasses arm, strap, etc. The piezo strips in this embodiment increase in vibration capacity from back to front.

[000370] In head-worn displays it is advantageous for the optics to be compact and low in weight to make the head-worn display more comfortable for the user. To this end, thinner optics are typically lower in weight. To provide a more immersive viewing experience, a wider display field of view is desirable. For augmented reality applications, a large see-through field of view provides the user with an improved see-through view, so the user feels more connected with the surrounding environment.

[000371] Figure 78 is an illustration of a cross section of optics with a folded optical path that provide excellent image quality, because the wavefront is preserved throughout and there are no structures with multiple edges that tend to scatter light, such as Fresnel lenses, segmented reflectors or diffractive lenses. These optics include an image source 7810 that provides image light 7825 that passes through the optics to the eyebox 7815, where a user can view a displayed image comprised of the image light 7825 in a display field of view. A see-through view of the surrounding environment can also be provided, comprised of see-through light 7729 in a see-through field of view, wherein the displayed image is seen by the user as overlaid on top of the see-through view of the environment. Bundles of rays of image light 7825 are shown in Figure 78 to illustrate how the light passes from the image source 7810 to the eyebox 7815 along a folded optical path. One or more lenses 7820 collect the image light 7825 and present it to a beam splitter plate 7855 that includes a first partially reflective surface to redirect a portion of the image light 7825 toward a curved partial mirror 7845 that includes a second partially reflective surface. A portion of the image light 7825 is then reflected by the curved partial mirror 7845 back toward the beam splitter 7855, which then transmits a portion of the image light 7825 so that it is presented to the eyebox 7815. The curve of the partial mirror 7845 presents the image light 7825 to the eyebox 7815 as a cone of light with an included angle, indicated by the solid lines of the outermost rays 7827, that is the display field of view. The see-through field of view is limited by the edges of the various elements and is shown by the included angle between the dotted lines 7835. The multiply folded path of the image light 7825 between the image source 7810 and the eyebox 7815 greatly reduces the overall size of these optics. However, while these optics can provide excellent image quality and are relatively compact, there are opportunities to reduce the assembly cost, reduce the thickness, increase the display field of view and increase the see-through field of view.

[000372] Figure 77 is an illustration of a cross section of a new form of folded optics that improves on the optics shown in Figure 78. The optical path followed by the image light 7725 is similar to that followed by image light 7825 in that there are multiple folds between the image source 7710 and the eyebox 7715. See-through light 7729 can also be provided to the eyebox to provide the user with a see-through view of the surrounding environment. Bundles of rays of image light 7725 are shown in Figure 77 to illustrate how the light passes from the image source 7710 to the eyebox 7715 along a folded optical path. The major difference in the optics shown in Figure 77 is that surfaces of various optical elements are matched to one another so they can be cemented together into a solid optical assembly 7705, where the solid optical assembly 7705 includes at least the following elements: a field lens 7720, a power lens 7730, a prism 7750 and a front lens 7740. The field lens 7720 collects the image light 7725 provided by the image source 7710 and presents it to the power lens 7730. The field lens 7720 can have two optical surfaces that supply optical power provided by spherical or aspherical refractive surfaces. The power lens 7730 has an upper surface that is matched to the lower surface of the field lens 7720. The power lens 7730 also includes a first partially reflective surface 7755 that is plano and a second partially reflective surface 7745 that is curved (e.g. spherical or aspherical). A portion of the image light 7725 is reflected by the first partially reflective surface 7755 so that it is redirected toward the second partially reflective surface 7745, where a portion of the image light 7725 is reflected back toward the first partially reflective surface 7755. A portion of the image light 7725 is then transmitted through the first partially reflective surface 7755 as it passes to the eyebox 7715. The curved shape of the second partially reflective surface 7745 supplies optical power to the image light 7725, thereby causing the image light 7725 to be presented to the eyebox 7715 as a cone of light with an included angle, shown by the outermost rays 7727 of the image light 7725, that comprises the display field of view. Other surfaces in the solid optical assembly 7705 are matched to enable the various elements to be bonded together with transparent adhesive, including: the front surface of the power lens 7730 and the back surface of the front lens 7740; and the back surface of the power lens 7730 and the front surface of the prism 7750. The bondlines of transparent adhesive at the matched surfaces are typically 10-15 microns in thickness so that the bondlines have little effect on the image light 7725.

[000373] One advantage provided by the solid optical assembly 7705 is that the various elements included in the solid optical assembly 7705 (e.g. 7720, 7730, 7740 and 7750) can be separately manufactured and then cemented together to form a solid optical assembly 7705 as shown in Figures 79a and 79b. After being adhesively bonded together, the solid optical assembly 7705 can be a robust preassembled optical unit that can be easily installed into a frame along with the image source 7710. The alignment of the various elements (e.g. 7720, 7730, 7740 and 7750) is rigidly held in place by a transparent cement between the surfaces of the various elements. The transparent cement used on the surfaces between the various elements, such as between the field lens 7720 and the power lens 7730, between the power lens 7730 and the front lens 7740, and between the power lens 7730 and the prism 7750, can be, for example, a UV curing adhesive, a two-part adhesive or a thermal curing adhesive. The elements can be precisely held in alignment relative to one another by jigs, fixtures or robotic mechanisms while the transparent adhesive is cured in place (e.g. by heat or by ultraviolet light) to lock in the alignment. In a preferred embodiment, the matched surfaces that are cemented together are spherical. After the various elements of the solid optical assembly 7705 have been cemented together, the solid optical assembly 7705 can be installed into a rigid frame of the head-worn display that holds the solid optical assembly 7705 precisely in position relative to an adjacent solid optical assembly 7705 so that left and right versions of image light 7725 can be respectively provided to the left and right eyes of a user. Image sources 7710 can be positioned over the respective left and right solid optical assemblies 7705, and the image sources 7710 can be aligned relative to the respective solid optical assemblies 7705 to precisely position the left and right images for viewing by the left and right eyes of the user.

[000374] In the solid optical assembly 7705, the field lens 7720 is made from a different optical material than the power lens 7730, the front lens 7740 and the prism 7750. By using optical materials (either glass or plastic) with different refractive indices (e.g. > 0.05 different), a refractive effect supplying optical power can be provided across the curved interface between the field lens 7720 and the power lens 7730. For example, the field lens 7720 can be made from a material with a higher refractive index, such as polycarbonate (1.59), polystyrene (1.58) or OKP4 (1.61), and the power lens can be made from a material with a lower refractive index, such as acrylic (1.49) or Zeonex (1.53). As such, the solid optical assembly 7705 includes multiple internal optical surfaces, including at least one refractive surface between the field lens 7720 and the power lens 7730 and two or more reflective surfaces between the power lens 7730 and the prism 7750 and between the power lens 7730 and the front lens 7740.
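
As a rough illustration of the refractive effect described above, the optical power of a single buried refractive surface can be estimated as (n2 - n1)/R. The following is a minimal sketch, not taken from the patent; the 25 mm radius of curvature is an assumed value chosen only for illustration.

    # Minimal sketch (assumed values, not from the patent): optical power of the
    # buried refractive surface between a polycarbonate field lens (n = 1.59)
    # and an acrylic power lens (n = 1.49).
    def surface_power_diopters(n1: float, n2: float, radius_m: float) -> float:
        """Power of a single refractive surface: (n2 - n1) / R, in diopters."""
        return (n2 - n1) / radius_m

    # Hypothetical 25 mm radius of curvature for the cemented interface:
    print(round(surface_power_diopters(1.59, 1.49, 0.025), 2))  # -4.0 diopters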

[000375] To provide for undistorted see-through, it is important that the materials of all the elements through the horizontal thickness of the solid optical assembly 7705, where the user's see-through view of the surrounding environment passes, have the same or at least very similar refractive index (e.g. within < 0.05), so that the solid optical assembly 7705 appears as a solid optical plate or window when the user is looking at the see-through view of the surrounding environment. As an example, the power lens 7730, the front lens 7740 and the prism 7750 can all be made of materials that have a very similar refractive index (e.g. within 0.005 refractive index units) so the see-through light 7729 passes through the solid optical assembly without being distorted. The field lens 7720 can be made of a material that has a higher refractive index to provide a refractive effect when combined with the power lens 7730, but the dimensions of the field lens 7720 are selected to provide planar front and back surfaces that are adjacent to and coplanar with the front and back surfaces of the lower optical elements, including the power lens 7730, the front lens 7740 and the prism 7750, so the solid optical assembly 7705 appears to be a solid optical plate. Because the field lens 7720 extends through the thickness of the solid optical assembly 7705, and the power lens 7730, front lens 7740 and prism 7750 together extend through the thickness of the solid optical assembly 7705, an undistorted (e.g. distortion < 0.5 degree) see-through view is provided to the user both when looking through the field lens and when looking through the lower optics after the various elements have been cemented together with transparent adhesive.
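
The two tolerances quoted above (indices matched within 0.005 through the see-through stack, while the field lens differs by more than 0.05 to supply power) can be expressed as a simple check. This is a minimal sketch of mine, not from the patent; the material names and index values are the examples given in the text.

    # Minimal sketch: checking the index-matching constraints described above.
    SEE_THROUGH_TOL = 0.005     # see-through stack elements must match closely
    REFRACTIVE_MIN_DIFF = 0.05  # field lens must differ enough to supply power

    def indices_matched(indices, tol=SEE_THROUGH_TOL):
        return max(indices) - min(indices) <= tol

    # Example values from the text: acrylic-like stack plus a polycarbonate field lens.
    stack = {"power_lens": 1.49, "front_lens": 1.49, "prism": 1.49}
    field_lens = 1.59

    assert indices_matched(stack.values())                         # undistorted see-through
    assert field_lens - max(stack.values()) > REFRACTIVE_MIN_DIFF  # refractive effect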

[000376] Another advantage provided by the solid optical assembly 7705 is that the accuracy required in the various elements (e.g. 7720, 7730, 7740 and 7750) can be reduced. This is accomplished by using a transparent adhesive that has a refractive index that is very similar, or index matched (e.g. within 0.05 index units), to the material of one of the elements such as the field lens 7720, the power lens 7730, the front lens 7740 and the prism 7750. Optically speaking, the transparent adhesive then becomes part of the element because the adhesive is index matched to the material of the element. The surface between the elements then becomes defined by either the surface of the element that has a different refractive index or by a partially reflective coating applied to the surface of one of the elements. As such, only one side of each matched surface needs to be optically accurate, while the mating surface does not need to be optically accurate. For example, the lower surface of the field lens 7720 can have an accuracy of < 5 microns while the upper surface of the power lens 7730 can have an accuracy of < 30 microns if a transparent adhesive is used to bond the elements together that is index matched to the material of the power lens 7730. In the case of partially reflective coatings, the coating is applied to an accurate surface to provide improved optical performance. The mating surface then does not need to be very accurate, provided the transparent adhesive is index matched to the mating surface so that any irregularities and inaccuracies of the mating surface are filled in by the transparent adhesive. As a result, the number of surfaces that need to be highly accurate is substantially reduced, thereby increasing the yield during manufacturing and consequently reducing the manufacturing cost of the various elements. For example, for the solid optical assembly 7705, there are four optical surfaces that need to be precise (e.g. within 5 microns of the desired surface geometry) to provide excellent image quality: the upper surface of the field lens 7720, the surface between the field lens 7720 and the power lens 7730, the surface between the power lens 7730 and the front lens 7740 and the surface between the power lens 7730 and the prism 7750. The accuracy of the surfaces mating to the internal accurate surfaces can be substantially reduced (e.g. within 10-40 microns, depending on whether the surface is respectively an external see-through surface or an internal cemented surface). In addition, since the first and second partially reflective surfaces (7755 and 7745 respectively) are internal to the solid optical assembly 7705, these precise optical surfaces are respectively protected from damage during use by the front lens 7740 and the prism 7750. In addition, the accurate surfaces can be positioned on different elements if that provides a manufacturing advantage, since the surfaces are matched between elements. For example, the first partially reflective surface and its associated partially reflective coating can be placed on either the lower surface of the power lens 7730 or the upper surface of the prism 7750, and the second partially reflective surface and its associated partially reflective coating can be positioned on either the front surface of the power lens 7730 or the rear surface of the front lens 7740.
Similarly, the accurate surface between the power lens 7730 and the field lens 7720 can be provided by the upper surface of the power lens 7730 or the lower surface of the field lens 7720; however, in this case, since the refractive indices of the two elements are different and this accurate surface provides a refractive effect, the index matching adhesive is chosen to match the element that does not provide the accurate surface, so the adhesive fills in the inaccuracies of that surface.

[000377] Yet another advantage provided by the solid optical assembly 7705 is that the see-through field of view can be substantially increased. As shown in Figure 77 and as previously described herein, the solid optical assembly 7705 can be comprised of two different optical materials, wherein the field lens 7720 has one refractive index and the other various elements (7730, 7740 and 7750) all have very similar refractive indices that are different from the refractive index of the field lens 7720. Since the field lens 7720 shares the same front surface as the front lens 7740 and the same back surface as the prism 7750, the solid optical assembly 7705 appears to the user as a solid plate window with little see-through distortion. As a result, the user can see through both the field lens 7720 and the other various elements (7730, 7740 and 7750), and the see-through field of view then encompasses the entire front surface of the solid optical assembly 7705, as shown by the dotted lines 7735. By comparing the subtended angle of the dotted lines 7735 shown in Figure 77 to the subtended angle of the dotted lines 7835 shown in Figure 78, it can be readily seen that the solid optical assembly 7705 provides a much greater vertical see-through field of view than the embodiment shown in Figure 78, because the vertical see-through angle of the embodiment shown in Figure 78 is limited by the lower surface of the lens 7820, where the refractive index through the thickness changes substantially, whereas the see-through angle of the solid optical assembly 7705 can encompass all of the various elements including the field lens 7720. As such, the vertical see-through field of view can be substantially larger than the display field of view in the solid optical assembly 7705. The front lens 7740 and the prism 7750 are designed in conjunction with the power lens 7730 to provide a uniform thickness plate when cemented together, so the see-through light 7729 is not distorted as it passes to the eyebox 7715. The field lens 7720 is then designed so that the lateral dimension matches the combined thickness of the power lens 7730, the front lens 7740 and the prism 7750. In this way, the solid optical assembly 7705 comprises a uniform thickness plate of optical material with plano front and back surfaces, so the user is provided an undistorted see-through view of the surrounding environment.

[000378] A further advantage provided by the solid optical assembly 7705 is that the optics can be substantially thinner than the embodiment shown in Figure 78. This is because the image light 7725 is contained within the optical material of the solid optical assembly 7705, so that a refractive effect occurs as the image light 7725 exits from the back of the solid optical assembly 7705, passing from the high refractive index material of the solid optical assembly 7705 to the low refractive index air on its way to the eyebox 7715. This can be seen as a change in angle of the outermost rays 7727 of the image light 7725 where they pass from the back surface of the prism 7750 into the air on the way to the eyebox 7715. As such, the subtended angle of the outermost rays 7727 of the image light 7725 is reduced inside the material of the solid optical assembly 7705. The reduced subtended angle of the outermost rays 7727 of the image light 7725 enables the radius of curvature of the second partially reflective surface 7745 to be increased while still providing the desired subtended angle of the outermost rays 7727 of the display field of view. Thus, the reduced subtended angle enables a reduced thickness of the optics for a given display field of view. Figure 81 is a magnified portion of Figure 77 wherein the change in the subtended angle between the outermost rays 7727 of the image light 7725 can be better seen where they pass from the back surface of the prism 7750 into the air on their way to the eyebox 7715. Internal to the solid optical assembly 7705, the subtended angle between the outermost rays 7727 is reduced compared to the subtended angle in the air, and as a result the footprint (area covered by) of the ray bundles of the image light 7725 is reduced in size at the second partially reflective surface 7745 and at the first partially reflective surface 7755. This reduction in footprint of the ray bundles of the image light 7725, along with the reduced sag of the increased radius of curvature of the second partially reflective surface 7745, provides a reduction in the thickness of the solid optical assembly 7705 as measured from the front to the back (right to left as shown in Figure 77). By comparison, the subtended angle of the outermost rays 7827 of the image light 7825 in the embodiment depicted in Figure 78 is constant between the eyebox 7815 and the curved partial mirror 7845, and as a result the thickness of the optics is increased relative to what is provided by the solid optical assembly 7705. Consequently, by positioning the first and second partially reflective surfaces (7755 and 7745) internal to the solid optical assembly 7705, the subtended angle of the image light 7725 is reduced relative to the display field of view and the footprints of the image light 7725 at the first and second partially reflective surfaces are correspondingly reduced, thereby enabling a reduction in thickness of the solid optical assembly 7705. For example, optics of the type shown in Figure 78 can be 14mm thick while solid optics of the type shown in Figure 77 can be 11mm thick for the same field of view, thereby reducing the thickness of the optics by approximately 21%.
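
The angle reduction described above follows directly from Snell's law at the exit surface. The sketch below is illustrative only and not from the patent; the 30 degree field of view and the index of 1.53 are assumed values.

    # Minimal sketch (assumed values, not from the patent): Snell's law at the
    # exit surface explains the reduced subtended angle inside the assembly.
    import math

    def internal_half_angle_deg(half_angle_air_deg: float, n: float) -> float:
        """Cone half-angle inside a medium of index n, from Snell's law."""
        return math.degrees(math.asin(math.sin(math.radians(half_angle_air_deg)) / n))

    # A 30 degree display field of view (15 degree half-angle in air) inside an
    # acrylic-like material (n ~ 1.53, both values assumed):
    print(round(internal_half_angle_deg(15.0, 1.53), 1))  # ~9.7 degrees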

[000379] In embodiments, the solid optical assembly 7705 is a solid block comprised of two optical materials with at least one internal refractive surface and at least two internal reflective optical surfaces, wherein the solid optical assembly 7705 maintains the wavefront of the image light 7725 throughout the optics to provide improved image quality in the displayed image presented to the user. The front and back surfaces of the solid optical assembly 7705 can both be plano, so that an undistorted see-through view of the surrounding environment can be provided that is transmitted through the entire front surface of the solid optical assembly 7705, thereby providing a larger vertical see-through field of view. The plano front and back surfaces of the solid optical assembly 7705 also provide for easier cleaning of the solid optical assembly 7705 for improved viewing of the displayed image and the see-through view of the surrounding environment.

[000380] In embodiments, the curved surface of the second partially reflective surface 7745 can be replaced by a flat holographic surface that has optical power, where the flat holographic surface provides the same optical power as the curved surface of the second partially reflective surface 7745. The flat holographic surface with optical power can be positioned at the front surface of the solid optical assembly 7705, thereby making the front lens 7740 unnecessary and further reducing the overall thickness, or the flat holographic surface with optical power can be positioned internal to the solid optical assembly 7705 with a uniform thickness front lens 7740.

[000381] In embodiments, features are added to the various elements to enable the elements to self-align relative to each other during the cementing process. While spherical and aspherical surfaces do tend to align with each other when mating surfaces are brought together, this alignment is largely in regard to the decenter and the Z position of the mating surfaces and not in regard to tilt or rotational alignment between the mating surfaces. As such, the features can include complementary tapered structures or beveled structures with mating slots or grooves, so the elements are guided into position as they are pressed together to reduce tilt and rotational misalignment between surfaces. The features are preferably located at the sides of the elements so the thickness of the solid optical assembly 7705 is not increased. Alternatively, features can be located at the front or back of the elements and the features can be removed (e.g. by machining or cutting) from the solid optical assembly 7705 after cementing is completed.

[000382] Figure 80 shows an example of coatings that can be applied to the solid optical assembly 7705. Black coatings, such as black paint, can be applied to portions of the sides of the field lens 7720 to reduce stray light associated with image light 7725 that reflects off the internal sidewalls of the field lens 7720. Black coating can also be applied to the bottom surface of the prism 7750 to prevent image light 7725 that passes through the first partially reflective surface 7755 from escaping from the solid optical assembly 7705. Black coating on the bottom surface of the prism 7750 also prevents stray light from the environment below the head-worn display from being transmitted upward into the prism 7750, where it can be reflected by the first partially reflective surface back toward the eyebox 7715, thereby interfering with the displayed image seen by the user. The black coatings are indicated by heavy lines in Figure 80. Antireflective coating can be applied to the front and back surfaces of the solid optical assembly 7705, as indicated by the dotted lines in Figure 80. The black coatings and the antireflection coating can be applied to the solid optical assembly 7705 after the various elements have been cemented together, to reduce the number of coating runs needed and thereby reduce coating costs. The first partially reflective surface 7755 and the second partially reflective surface 7745 are coated with partially reflective coatings, as indicated by dashed lines in Figure 80; the partially reflective coating need not be the same on these two internal surfaces. In embodiments, the first partially reflective surface 7755 and the second partially reflective surface 7745 can be coated with simple partial mirror coatings that reflect substantially all of the visible wavelengths equally (e.g. 50% reflectivity). Alternatively, at least one of the first or second partially reflective surfaces (7755, 7745) can be coated with a notch mirror coating that has a higher reflectivity for wavelength bands included in the image light 7725 as provided by the image source 7710 and a higher transmissivity for visible wavelengths not included in the wavelength bands of the image light 7725. Preferably, the notch mirror coating has a reflectivity of > 50% for wavelength bands included in the image light 7725 and a transmissivity of > 50% for visible wavelengths not included in the wavelength bands of the image light 7725. In a further preferred embodiment, the notch mirror coating reflects a majority of selected wavelength bands of image light to provide a bright displayed image while simultaneously transmitting a majority of the visible light between the selected wavelength bands to provide a bright see-through view of the surrounding environment. As previously described herein, the partially reflective coating for the first partially reflective surface 7755 can be applied to either the lower surface of the power lens 7730 or the upper surface of the prism 7750, and the partially reflective coating for the second partially reflective surface 7745 can be applied to either the front surface of the power lens 7730 or the back surface of the front lens 7740. The notch mirror coating can be applied to a surface of a plastic element. In a preferred embodiment, an element of the solid optical assembly is a glass element (e.g. the front lens 7740) and the notch mirror coating is applied to a surface of the glass element. Alternatively, a notch mirror multilayer film (such as is described in United States Patent 7851054) can be applied at an interface between elements and adhesively bonded into place.

[000383] In embodiments, the solid optical assembly 7705 is coated with black absorbing material on the sides and bottom of the solid optical assembly 7705 to reduce glinting reflections of see-through light 7729 from the non-optical surfaces of the solid optical assembly 7705. By applying the black coating to the sides and bottom of the solid optical assembly 7705, the see-through view is not significantly blocked, while eliminating the glinting reflections substantially improves the viewing experience. The solid optical assembly 7705 can also be made wider or taller than is needed for displaying the image to the user, to position the sides and bottom of the solid optical assembly 7705 further away from the user's line of sight, where any artifacts caused by these non-optical surfaces are less noticeable.

[000384] In embodiments, the geometry of the solid optical assembly can be different from that of the solid optical assembly 7705 shown in Figures 77, 79a and 79b, wherein the curved optical surface of the second partially reflective surface 8255 is positioned at the bottom of the solid optical assembly 8205, as shown in Figure 82. The solid optical assembly 8205 includes at least an upper lens 8220, which can include a field lens; an upper prism element 8250 with a curved surface shared with the upper lens 8220; and a lower prism element 8230 that includes the curved surface associated with the second partially reflective surface 8255. The upper prism element 8250 and the lower prism element 8230 are made from materials with the same refractive index (within < 0.05) and adhesively bonded together with a transparent index matched adhesive. The material of the upper lens 8220 has a refractive index that is different from that of the upper prism element 8250 and the lower prism element 8230 (e.g. at least 0.05 greater), so that a refractive effect is supplied to the image light 8225 as it passes from the upper lens 8220 to the upper prism element 8250. The upper lens 8220 is adhesively bonded to the upper prism element 8250 with a transparent index matched adhesive, where the adhesive can be index matched to either the material of the upper lens 8220 or the material of the upper prism element 8250. The various elements included in the solid optical assembly 8205 together form a uniform thickness block that provides an undistorted see-through view of the surrounding environment. In addition, the upper prism element 8250 and the lower prism element 8230 can be designed to be the same shape to reduce manufacturing cost. The second partially reflective surface 8255 is coated to make the surface a reflective surface that supplies optical power to the image light 8225. The first partially reflective surface 8245 can be coated, such as with a partially reflective dielectric coating (e.g. 20 to 50% reflectivity and 80 to 50% transmission), wherein the coating can be applied to either the lower surface of the upper prism element 8250 or the upper surface of the lower prism element 8230. Image light 8225 from the image source 7710 passes through the upper lens 8220 and the upper prism element 8250. A portion of the image light 8225 is transmitted by the first partially reflective surface 8245. The image light 8225 then passes through the lower prism element 8230 until it is incident on the second partially reflective surface 8255, where it is reflected by the curved surface, which supplies optical power to the image light 8225, thereby providing a cone of image light 8225, which forms the display field of view, to the eyebox 7715. By positioning the curved surface of the second partially reflective surface 8255 at the bottom of the solid optical assembly 8205, the see-through light 7729 from the surrounding environment no longer has to pass through the second partially reflective surface 8255, thereby enabling the see-through transmission to be increased (e.g. > 50% transmission). In addition, since the see-through light 7729 doesn't pass through the second partially reflective surface 8255, the curved surface of the second partially reflective surface 8255 can be coated with a full mirror coating (e.g. > 90% reflectivity for visible light) to provide increased efficiency. However, the thickness of the solid optical assembly 8205 is increased in this geometry, because the ray bundles of the image light 8225 are diverging in the longer vertical portion of the solid optical assembly 8205, thereby increasing the footprint of the ray bundles of the image light at the second partially reflective surface 8255, which causes the horizontal thickness of the solid optical assembly 8205 to be larger than that of the solid optical assembly 7705. However, the principles and advantages of making a pre-assembled solid optical assembly 8205 in this geometry apply similarly as previously described herein.

[000385] In embodiments, a solid optical assembly can be used with additional separate optical elements to provide an increased display field of view. Figure 83 is an illustration of a solid optical assembly 8305 with an additional separate optical element 8320. In this embodiment, a prism 7750, a power lens 7730 and a front lens 7740, all made with materials that have the same or very similar refractive indices, are cemented together as previously described herein. A middle element 8322 made from a material that has a different refractive index is cemented to the power lens 7730 to provide a solid optical assembly 8305 that is similar to what has been described previously herein, with a see-through view provided to the user wherein scene light from the surrounding environment can pass through all of the elements that are cemented together, thereby providing a greater vertical see-through field of view. A separate optical element 8320 (shown as a field lens in Figure 83, but other optical elements and multiple optical elements are also possible) is then positioned between the middle element 8322 and the image source 7710. By adding another optical element, further control over the image light 7725 is enabled, so that the performance of the head-worn display can be improved, such as by increasing the display field of view (e.g. > 35 degrees, or 40 degrees or greater) or increasing the sharpness (MTF) in the displayed image seen by the user. An air gap can separate the separate optical element 8320 and the solid optical assembly 8305 to enable a greater refractive effect on the image light. To position the separate optical element 8320 in relation to the other elements of the solid optical assembly 8305, features can be attached to, or manufactured as part of, adjacent elements to align the separate optical element 8320 relative to the solid optical assembly as they are being assembled. Figure 83 shows an example of alignment features 8365 and 8367, wherein feature 8367 is a cylindrical pin that fits into feature 8365, which is a tapered slot. Feature 8365 can be molded as part of the middle element 8322 or accurately attached to the middle element 8322 using a jig. Similarly, feature 8367 can be molded as part of the separate optical element 8320 or accurately attached to the separate optical element 8320 using a jig. The features 8365 and 8367 align the separate optical element 8320 relative to the middle element 8322 by reducing the lateral tilt and rotation about the optical axis. Other features can be added to reduce other alignment inaccuracies. Different types of mating features are possible, such as matching tapered surfaces or matching flanges between the separate optical element 8320 and the middle element 8322. Figure 84 shows another example of features 8465 and 8467 that can be used to align the separate optical element 8320 relative to the other elements of the solid optical assembly 8305 and hold it in position during assembly. Features 8465 and 8467 are shown as being wider, to aid in preventing tilt across the narrow dimension of the separate optical element 8320 and also to enable the surfaces to be adhesively bonded together during assembly in a way that preserves the air gap between the separate optical element 8320 and the middle element 8322. While the features 8365 and 8465 are shown as being associated with the middle element 8322, they can also be associated with the power lens 7730 or other elements.
Figure 84a shows a further illustration of a special flange 8421 associated with the separate optical element 8420 (alternatively, the special flange can be associated with the middle element 8322 or the power lens 7730), where the flange 8421 positions the separate optical element 8420 relative to the middle element 8422 and the power lens 7730. The special flange 8421 supports the separate optical element 8420 across the ends or around the edges of the separate optical element 8420 to thereby accurately establish the air gap 8424 between the separate optical element 8420 and the middle element 8422. The special flange can be adhesively bonded into place after positioning the separate optical element 8420 in relation to the middle element 8422 and the power lens 7730. The special flange 8421 can seat onto the upper surface of the power lens 7730 (as shown in Figure 84a) or onto a surface of the middle element 8422 (not shown). In addition, the special flange 8421 can include tapered features 8423 that mate with corresponding features at the edge of the middle element 8422, so that the separate optical element 8420 is physically aligned relative to the middle element 8422 as the lenses are assembled. In the case where the special flange extends all the way around the edge of the middle element 8422, the special flange 8421 can provide a further benefit of keeping dust out of the air gap 8424. By providing a special flange 8421 to the separate optical element 8420 and adhesively bonding the special flange 8421 to the middle element 8422 or the power lens 7730, an extended solid optical assembly 8405 is provided with improved control over the image light, so that a wider field of view or improved image quality (e.g. increased sharpness) is possible. The extended solid optical assembly has three internal refractive surfaces: the surface between the bottom of the separate optical element 8420 and the air in the air gap 8424; the surface between the air in the air gap 8424 and the upper surface of the middle element 8422; and the surface between the bottom of the middle element 8422 and the top of the power lens 7730. Likewise, alignment features can be added to other elements to provide alignment with adjacent elements in the extended solid optical assembly 8405.

[000386] In embodiments, the front lens 7740 can be made from a material (e.g. glass) with a substantially different thermal expansion coefficient from the power lens 7730 (e.g. plastic), and, to allow the two elements to expand differently, the two elements can be physically held together without being cemented. As a result, there can be a tiny air gap (e.g. 10 microns or less) between the elements, or the gap can be filled with an index matched liquid such as an oil. To prevent spurious reflection artifacts from occurring at the interface, the front surface of the power lens 7730 is coated with an antireflection coating and the back surface of the front lens 7740 is coated with a partially reflective coating, as previously described herein. Features can be added to the frame of the head-worn display to physically hold the front lens 7740 against the power lens 7730. Preferably, the matched surface between the power lens 7730 and the front lens 7740 is spherical, so that alignment between the two elements is not critical provided that contact is maintained between the surfaces of the two elements. Since the gap between the elements is tiny, light from the surrounding environment is essentially unaffected by the gap, so that the user is provided with a see-through view that is substantially limited by the first and second partially reflective surfaces alone.

[000387] In embodiments, a corrective ophthalmic element can be attached to the back surface of the solid optical assembly, wherein the corrective ophthalmic element is designed to provide the optical characteristics of the ophthalmic prescription of the user. Figure 39 is an illustration of a solid optical assembly 7705 as seen from above. In this Figure, the solid optical assembly 7705 can be seen to have flat front and back surfaces (the front surface of the front lens 7740 is shown at the top and the back surface is shown at the bottom) with a uniform combined thickness to provide an undistorted see-through view of the surrounding environment. Figure 87 is an illustration of a solid optical assembly 7705 with a corrective ophthalmic element 8780 shown attached to the back surface of the solid optical assembly 7705. The corrective ophthalmic element 8780 can be physically held against the back of the solid optical assembly 7705 and aligned relative to the solid optical assembly 7705 by mechanical features (not shown) on the sides or associated with the frame, or the corrective ophthalmic element 8780 can be aligned relative to the solid optical assembly 7705 and then adhesively bonded to the back surface of the solid optical assembly 7705. The alignment of the corrective ophthalmic element 8780 relative to the solid optical assembly 7705 can be provided by interlocking features associated with the solid optical assembly 7705 and the corrective ophthalmic element 8780. By positioning the corrective ophthalmic element 8780 at the back surface of the solid optical assembly 7705, aligned with the optics of the solid optical assembly 7705, the user's view of both the displayed image and the see-through view of the surrounding environment is improved by adding the optical characteristics (for example: diopter power, astigmatism, wedge) associated with the user's ophthalmic prescription. As such, the corrective ophthalmic element 8780 can be provided with the specific ophthalmic prescription of the user or can be provided with a general ophthalmic prescription, such as diopter power alone.

[000388] In embodiments, the corrective ophthalmic element can be mechanically or magnetically held onto the back of the solid optical assembly by a holder with features that clip or snap onto the solid optical assembly. Figure 89 is an illustration of a corrective ophthalmic element 8780 mounted in a holder 8981, wherein the holder 8981 includes mounting features 8982 that clip into corresponding mounting features 8983 in the solid optical assembly 8905. The field lens 8920 can be modified to include flat flanges at the edges of the field lens 8920, as shown for example in Figure 89, where the mounting features 8983 are depressions in the solid optical assembly 8905 into which the mounting features 8982 of the holder 8981 can clip. When the holder 8981 is clipped into the features 8983 of the solid optical assembly 8905, the holder 8981 can be rigidly held in position and the corrective ophthalmic element 8780 can be rigidly held in alignment relative to the optics of the solid optical assembly 8905. The corrective ophthalmic element 8780 can be physically mounted into a pocket in the holder 8981 or it can be adhesively bonded into a pocket in the holder 8981. The features 8982 and 8983 can be located on the sides, top or bottom of the holder 8981 and the solid optical assembly 8905, as long as the features are located in corresponding locations so the features can clip into one another. By clipping onto the edges of the solid optical assembly 8905 so the corrective ophthalmic element 8780 is held against the back of the solid optical assembly 8905, the thickness of the corrective ophthalmic element 8780 can be reduced and the thickness of the head-worn display can also be reduced. In embodiments, magnets or mechanical features may be designed into the HWC frame that is holding the solid optic. For example, the optic may be mounted and secured in the frame of the HWC, and a slot, magnet and/or other feature may be mounted in the frame such that the corrective optic can be snapped or clipped into place by a user.

[000389] In embodiments, the solid optical assembly can be provided with curved front and back surfaces to improve the form factor. Figure 86 is an illustration that shows a curved version of a solid optical assembly 8605 as seen from above, wherein the front and back surfaces have concentric curves. As a result, the see-through thickness, as measured along the line of sight from the user's eye, is uniform, to provide an undistorted see-through view of the surrounding environment as the user's eye moves around in the see-through field of view. By providing a curved geometry of the solid optical assembly 8605, the solid optical assembly 8605 can be made to fit more compactly into a frame that has a curved geometry, thereby enabling a thinner form factor of the head-worn display, such as for example a frame that wraps around the head of the user.

[000390] In embodiments where elements in the solid optical assembly 7705 or 8605 are made of different materials that have different thermal expansion coefficients, an index matched optical gel can be used at the interface between the elements instead of an adhesive. The optical gel has characteristics of both a solid and a liquid over the operating range of the head-worn display (e.g. -20 to 80 degrees C), so that the optical gel stays at the interface with reduced migration, while also allowing some movement at the interface as the elements expand and contract as the temperature of the head-worn display changes. An example of an index matched optical gel is available from Thor Labs, Newton, NJ as product #G608N3 with a refractive index of 1.46. An example of different materials that would benefit from the optical gel is a front lens 7740 made of a glass such as Schott N-FK5, with a refractive index of 1.487 and a thermal expansion coefficient of 9.2E-6/degree C, and a power lens 7730 made of acrylic, with a refractive index of 1.49 and a thermal expansion coefficient of 9E-5/degree C, so that the two materials are index matched but the power lens 7730 has a substantially higher thermal expansion than the front lens 7740. By using a flexible optical gel at this interface instead of a rigid optical adhesive, distortion of the elements caused by thermal stress is greatly reduced and the index matched bondline can be maintained, and as a result image quality is improved over the operating range of the head-worn display.
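
To see why the mismatch matters, the differential expansion across the bonded interface can be estimated from the two coefficients quoted above. This is a minimal sketch of mine, not from the patent; the 30 mm interface length is an assumed value.

    # Minimal sketch: differential thermal expansion across the glass/acrylic
    # interface using the coefficients quoted above. The 30 mm length is assumed.
    ALPHA_GLASS = 9.2e-6    # Schott N-FK5, per degree C
    ALPHA_ACRYLIC = 9.0e-5  # acrylic power lens, per degree C

    def differential_expansion_um(length_mm: float, delta_t: float) -> float:
        """Difference in expansion (microns) between the two materials."""
        return (ALPHA_ACRYLIC - ALPHA_GLASS) * length_mm * 1000.0 * delta_t

    # Over the stated -20 to 80 degree C operating range (delta T = 100 C):
    print(round(differential_expansion_um(30.0, 100.0), 1))  # ~242.4 microns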

[000391] In embodiments, the sides and bottom of the solid optical assembly (7705 or 8305) can be flared to better match the see-through line of sight of the user and thereby reduce the interference with the see-through view of the surrounding environment caused by the sidewalls and bottom. As a result, the area of the front surface of the solid optical assembly (7705 or 8305) is larger than the area of the back surface. Figure 88 is an illustration of a solid optical assembly 8805 shown from above, wherein the front surface is shown on the top and the back surface is shown on the bottom. The front surface is larger than the back surface, so that the sides of the solid optical assembly 8805 are flared outward toward the front surface. The sidewalls then more closely follow the user's line of sight, so that the sidewalls are less noticeable to the user when viewing the see-through view of the surrounding environment.

[000392] In embodiments, the solid optical assembly of Figure 82 can include a polarizing beam splitter layer. The image light 8225 can be polarized either by adding a polarizer at the image source 7710, if the image source 7710 is an emissive display, or, if the image source 7710 is a reflective display such as an LCOS, the illuminating light (e.g. from a frontlight, not shown) incident onto the image source 7710 can be polarized as supplied and then analyzed after reflection, as is known by those skilled in the art. The polarization state of the image light 8225 can be selected in conjunction with a polarizing beam splitter layer, which is the first partially reflective surface 8245, so that the polarized image light is substantially transmitted by the polarizing beam splitter layer. The lower prism element 8230 is comprised of two pieces, an upper prism piece with plano surfaces and a lower plano/convex piece, that together form the shape of the lower prism element 8230 shown in Figure 82. A quarter wave film, with its fast axis oriented at 45 degrees to the polarization axis of the image light 8225, is positioned between the upper prism piece and the lower plano/convex piece and adhesively bonded into place. The polarized image light 8225 then has its polarization state rotated by 90 degrees by passing through the quarter wave film, being reflected by the second partially reflective surface 8255, and passing back through the quarter wave film. Because the polarization state has been changed by 90 degrees, the image light 8225 is then reflected and redirected by the polarizing beam splitter layer so that it exits from the back surface of the lower prism element 8230 on its way to the eyebox 7715. The advantage of using a first partially reflective surface 8245 that is a polarizing beam splitter layer is that the polarized image light 8225 from the image source 7710 is substantially transmitted by the polarizing beam splitter layer, while the polarized image light 8225 that has been reflected by the second partially reflective surface 8255 and altered by passing twice through the quarter wave film is substantially reflected by the polarizing beam splitter layer. As a result, very little image light 8225 is lost during the transmission or reflection. Consequently, very little image light 8225 exits through the front surface of the upper prism element 8250, where it would be visible to other people in the surrounding environment as a miniature version of the displayed image, also known as eyeglow.
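
The 90 degree rotation from the double pass through the quarter wave film can be verified with Jones calculus. The sketch below is my own illustration, not from the patent; it idealizes the mirror reflection and ignores overall phase, using the standard result that a double pass through a quarter wave plate at 45 degrees acts as a half wave plate.

    # Minimal sketch: a double pass through a quarter-wave plate with its fast
    # axis at 45 degrees rotates linear polarization by 90 degrees.
    import numpy as np

    def waveplate(retardance: float, theta_deg: float) -> np.ndarray:
        """Jones matrix of a waveplate with given retardance and fast-axis angle."""
        t = np.radians(theta_deg)
        r = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        plate = np.diag([1.0, np.exp(1j * retardance)])
        return r @ plate @ r.T

    qwp45 = waveplate(np.pi / 2, 45.0)   # quarter-wave plate at 45 degrees
    horizontal = np.array([1.0, 0.0])    # input linear polarization state

    out = qwp45 @ qwp45 @ horizontal     # double pass (mirror idealized away)
    print(np.round(np.abs(out), 6))      # [0. 1.] -> rotated to vertical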

[000393] In embodiments, the upper lens 8220 of Figure 82 is comprised of two or more refractive elements made from materials with at least two different refractive indices (e.g. > 0.05 difference), so that refractive effects are provided to the image light 8225 as it passes between elements. The two or more refractive elements are adhesively bonded together by transparent index matched adhesive. As a result, the solid optical assembly 8205 includes at least two internal refractive surfaces and at least one internal partially reflective surface. By providing additional refractive elements in the solid optical assembly 8205, a wider display field of view can be provided (e.g. 40 degrees or greater). The various elements included in the solid optical assembly 8205 are designed to provide a uniform thickness to provide an undistorted see-through view of the surrounding environment.

[000394] Another aspect of the present inventions relates to the optimization of image light transfer to the user's eye and scene light transmission to the user's eye. In embodiments, notch mirrors/filters are used to reflect the image light while transmitting much of the scene light.

[000395] In a head-worn computer or head-worn display that displays a projected image while also providing a user with a see-through view of the surrounding environment, it can be advantageous to include a combiner that has a notch mirror, where the notch mirror has bands of high reflectivity separated by bands of low reflectivity and high transmission. The bands of high reflectivity are designed to be spectrally positioned to correspond with the emission bands provided by the image source and the associated image light, so that the image light is efficiently reflected by the combiner to deliver the image light to the user's eye. At the same time, the bands of high transmission enable light from the environment to be efficiently transmitted by the combiner to the user's eye, to provide a see-through view of the surrounding environment. The user then sees a displayed image, comprised of image light, overlaid onto a see-through view of the surrounding environment, comprised of scene light. However, the see-through view of the environment can be degraded by the notch mirror, because certain colors in the environment are blocked by the bands of high reflectivity of the notch mirror. While the color blocking of the notch mirror typically does not substantially affect the viewing experience of broad band colors, such as are found in nature, color blocking can be an issue for narrow band lights in the environment, such as LEDs that are used for different illumination applications. For example, it can be important to be able to see the red color associated with warning lights, such as traffic lights and brake lights, in the see-through view if the user is driving a car. As a result, the inventors appreciated that there is an opportunity to provide an improved notch mirror system in a head-worn computer that provides a bright displayed image while still providing a high quality see-through view of the surrounding environment, particularly if the surrounding environment includes lights such as LEDs or other lights spectrally similar to the image light.

[000396] Figure 90 is a chart showing a typical emission spectrum, showing relative intensity vs wavelength, for an LED module that includes red, green and blue LEDs. The data is shown for a multi-LED module LRTB GFTG from OSRAM, Regensburg, Germany. LEDs such as these can be used to illuminate the image source in a head-worn display that includes a reflective display such as LCOS, FLCOS or DLP, or alternatively LEDs can be used to illuminate a backlight for a transmissive display such as a backlit LCD. As can be seen from the blue, green and red emission spectra, shown as 9015, 9017 and 9019 respectively, the full width half max (FWHM, which is the nm width of the emission curve taken at 50% of the peak relative intensity) bandwidth associated with the LEDs can be 42nm for the blue LED, 64nm for the green LED and 25nm for the red LED. For image light that originates from LED illumination, the spectrum of the image light that comprises the image viewed by the user is the combined spectra of the blue, green and red LEDs that are shown in Figure 90. Emissive displays, including OLED and micro-LED, can also be used as the image source in head-worn displays, where emissive displays such as OLED provide a spectrum that is similar to the combined spectra provided by blue, green and red LEDs.
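
The FWHM figures quoted above can be extracted programmatically from a sampled emission curve. The following is a minimal sketch of mine, not from the patent; the Gaussian test spectrum is an assumed stand-in for measured LED data.

    # Minimal sketch: full width at half maximum (FWHM) of a sampled emission
    # spectrum, here a Gaussian stand-in for the red LED (~25 nm FWHM).
    import numpy as np

    def width_at_level_nm(wavelengths, intensity, level=0.5):
        """Width of the emission curve at the given fraction of peak intensity."""
        above = wavelengths[intensity >= level * intensity.max()]
        return float(above.max() - above.min())

    wl = np.linspace(560.0, 700.0, 2801)
    sigma = 25.0 / 2.355                       # FWHM = 2.355 * sigma for a Gaussian
    red = np.exp(-0.5 * ((wl - 630.0) / sigma) ** 2)

    print(round(width_at_level_nm(wl, red), 1))       # ~25.0 nm at 50% of peak
    print(round(width_at_level_nm(wl, red, 0.7), 1))  # narrower width at 70% of peak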

[000397] Figure 91 shows an illustration of the reflectivity provided by a notch mirror with three 90% reflectivity bands, with 10% transmission, for blue, green and red, shown respectively as 9115, 9117 and 9119, that match the FWHM emission bands (42nm + 64nm + 25nm = 131nm total reflectivity) of the LEDs 9015, 9017 and 9019 shown in Figure 90. This type of notch mirror that has three high reflectivity bands is also known as a tristimulus notch mirror. Given that the FWHM bandwidths associated with each of the LEDs cover approximately 80% of the light energy emitted by the LEDs, a notch mirror that reflects 90% of the light over the same total bandwidth would reflect 90% X 80% = 72% of the light from the LED. At the same time, since the reflectivity of the notch mirror in the high transmission bands between the high reflectivity bands is approximately 5%, as shown in Figure 91, the notch mirror transmits approximately 95% of scene light of wavelengths between the high reflectivity bands, along with 10% transmission of scene light within the high reflectivity bands. The total transmission of scene light can then be approximately calculated by bandwidth weighting of the transmission: [(10% X 131) + (95% X (680-420-131))]/(680-420) = 52% of the total scene light in the visible range of 420 to 680nm is transmitted. As such, the notch mirror with the reflectivity shown in Figure 91 is simultaneously more efficient in both reflection of image light and transmission of scene light than a simple partial mirror that reflects 50% and transmits 50% of incident light across the entire visible range. However, light from the environment that originates from LEDs in the environment would be blocked by the notch mirror as efficiently as the image light is reflected, which means that LED light would be transmitted at approximately (10% X 80%) + (95% X 20%) = 27% within the see-through view. This level of transmission may not be sufficient to enable rapid observation of stop lights and other warning lights, which can be LED based, when the user of a head-worn display is operating a vehicle or other equipment.
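
The bandwidth-weighted estimate above is straightforward to reproduce. Below is a minimal sketch of mine, not from the patent; it also anticipates the narrower 88nm band case discussed in the next paragraph.

    # Minimal sketch: bandwidth-weighted see-through transmission of a
    # tristimulus notch mirror over the visible range.
    def scene_transmission(bands_nm, t_in_band, t_between, vis_lo=420.0, vis_hi=680.0):
        """Average transmission, weighting in-band and between-band regions."""
        vis = vis_hi - vis_lo
        return (t_in_band * bands_nm + t_between * (vis - bands_nm)) / vis

    # FWHM-matched bands (42 + 64 + 25 = 131 nm), 10% in-band, 95% between:
    print(round(scene_transmission(131.0, 0.10, 0.95), 2))  # 0.52, i.e. 52%

    # Narrower full-width-70%-max bands (26 + 39 + 23 = 88 nm), same coatings:
    print(round(scene_transmission(88.0, 0.10, 0.95), 2))   # 0.66, i.e. 66%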

[000398] To make LED light from the environment more visible to the user, the notch mirror can be modified to enable more light to be transmitted by the combiner. Figure 92 shows an illustration of a notch mirror designed with narrower 90% reflectivity bands separated by 95% transmission bands, shown as 9215, 9217 and 9219 for blue, green and red respectively, to provide a higher transmission of scene light from the environment. Here, the width of the high reflectivity bands is selected to be narrower than the FWHM of the light source associated with the image light. As a result, the % of image light that is reflected by the combiner toward the user's eye is reduced and the % of scene light transmitted by the combiner to the user's eye is increased. Due to the peaked shape of the emission spectra of typical light sources, such as the example emission spectra of LEDs 9015, 9017 and 9019 shown in Figure 90, reducing the width of the high reflectivity bands increases the % transmission of scene light faster than it reduces the % reflection of image light. As such, it is possible to design the notch mirror to reflect a majority (> 50%) of the image light while simultaneously providing an even greater transmission of scene light. As shown in Figure 92, the widths of the high reflectivity bands 9215, 9217 and 9219 have been chosen to match the full width 70% max (the nm width of the emission curve taken at 70% of the peak relative intensity) of the LEDs, which corresponds to reflective bands of approximately 26nm for blue, 39nm for true green and 23nm for red (88nm total), which provides a reflection of approximately 50% of the image light based on the area under the spectra curves. The scene light is then transmitted at ((10% X 88) + (95% X (680-420-88)))/(680-420) = 66% of the total scene light. At the same time, light from light sources such as LEDs in the environment will be transmitted at approximately 50%, since the high reflectivity bands only block 50% of this light. This greatly improves the visibility to the user of narrow band light sources such as LEDs that are in the environment, while preserving a reasonable level of efficiency of reflecting the image light to the user's eye and providing a relatively high level of see-through transmission of scene light from the surrounding environment. Similarly, the wavelengths of the high reflectivity bands can be selected to be offset from a specific light source in the environment that is important for the user to be able to see easily.

[000399] In embodiments, there may be a problem with using narrow reflectivity bands in the notch mirror on the combiner, in that a portion of the image light is then transmitted through the combiner, so that image light can be seen by adjacent people in the form of a miniature projected image. This effect is known as eyeglow. Eyeglow can be detrimental in that it reduces privacy for the user, because other people adjacent to the user can determine what the user is viewing in the head-worn display. Eyeglow can also be distracting, in that the user's eyes are not visible and instead the user has an other-worldly look with glowing eyes. As such, it is advantageous to be able to reduce eyeglow. This can be done by filtering the image light to provide image light with narrow emission bands into the optics of the head-worn display, wherein the narrow emission bands of the image light match the narrow high reflectivity bands of the notch mirror. Figure 93 shows an illustration of a transmission spectrum for a notch filter with narrow transmission bands 9325, 9327 and 9329 respectively for blue, green and red light, where the narrow transmission bands 9325, 9327 and 9329 are matched to the narrow high reflectivity bands 9215, 9217 and 9219 of the combiner shown in Figure 92. The notch filter can be positioned anywhere along the optical path between the image source and the combiner so long as it does not interfere with the user's see-through view of the surrounding environment; for example, the notch filter can be associated with the image source. The notch filter can operate by absorbing or reflecting the non-transmitted portions of the emission spectrum provided by the image source. The notch filter can be a plate or film that is positioned adjacent to the image source and has a multilayer coating or a multilayer film that has the desired transmission spectrum to convert the image light from a broader band spectrum, such as is shown in Figure 90, to a narrow band spectrum, such as is shown in Figure 94. The narrow band spectrum of the image light is then reflected by the narrow reflectivity bands in the combiner (shown in Figure 92) with high efficiency (e.g. > 80% and preferably > 90%), and as a result little of the image light (e.g. < 20%) is transmitted by the combiner, so that eyeglow is greatly reduced.

[000400] In embodiments of the display optics of a head-worn display, the display optics include a reflective or emissive image source with an associated notch filter with narrow transmission bands spectrally aligned with the peak emissions of the image light from the image source, to provide image light that has one or more narrow emission bands. The image light with narrow emission bands is then provided to display optics that include a combiner that has high reflectivity bands that are spectrally aligned in correspondence to the narrow emission bands of the image light and are spectrally wider than the narrow emission bands, to reflect a majority of the image light toward the user's eyes for viewing a displayed image comprised of image light. The combiner simultaneously transmits a portion of scene light from the surrounding environment so the user views the displayed image overlaid onto a see-through view of the surrounding environment. In an example, the transmission bands of the notch filter are 15nm wide and transmit more than 80% of the incident image light within the transmission bands and less than 10% of the image light between bands.
The high reflectivity bands of the notch mirror are then 18 nm wide and reflect greater than 80% of the incident image light within the reflection bands, while between the reflection bands they reflect less than 10% and transmit more than 80% of the light. The notch mirror simultaneously transmits greater than 60% of scene light (e.g. with 80% transmission between reflection bands and 10% within the reflection bands over the visible range of 420 to 670 nm: [(670 - 420 - (3 x 18)) x 80% + (3 x 18) x 10%]/(670 - 420) = 65%), including greater than 30% of LED light from the surrounding environment. In this way, LED lights in the environment, such as traffic lights or brake lights, can be readily seen by the user while eyeglow is prevented.
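
The see-through transmission figures in the two examples above follow from a simple area-weighted average over the visible range. The following minimal sketch (the function name is ours; the band widths and transmissions are taken from the examples) reproduces both the 66% and 65% results:

```python
def scene_transmission(band_widths_nm, t_in_band, t_between, lo_nm, hi_nm):
    """Area-weighted average transmission over the visible range lo_nm..hi_nm."""
    blocked = sum(band_widths_nm)          # total width of the reflective bands
    span = hi_nm - lo_nm
    return (t_in_band * blocked + t_between * (span - blocked)) / span

# Figure 92 example: 26 + 39 + 23 nm bands, 10% in-band and 95% between-band
# transmission over 420-680 nm -> ~66%.
print(scene_transmission([26, 39, 23], 0.10, 0.95, 420, 680))  # 0.662...

# 18 nm-band example: 10% in-band and 80% between-band over 420-670 nm -> ~65%.
print(scene_transmission([18, 18, 18], 0.10, 0.80, 420, 670))  # 0.648...
```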

[000401] In embodiments, the notch mirror is applied as a layer to a combiner surface that is curved and positioned so that the user's eye is on the concave side of the combiner and the curved combiner is thereby between the user's eye and the surrounding environment. This positioning enables the curved shape of the combiner surface to substantially improve the uniformity of the incident angle of both the image light and the see-through light onto the notch mirror layer across the respective fields of view. Given that the wavelengths associated with the high reflectivity bands of the notch mirror of the combiner will shift to shorter wavelengths and increase in bandwidth as the incident angle of the image light and the see-through light increases, it is advantageous to reduce the range of variation of the incident angle and thereby reduce color shifts in the image or the see-through view of the surrounding environment, as seen by the user. Figures 94a and 94b (taken from SemRock, Optical Filters at non-normal angles of incidence) show how the angle of incidence (AOI) and cone half angle (CHA) cause the performance of a bandpass filter to change. This type of change in a notch mirror would cause the reflected image light to become more bluish and the transmitted see-through view to become more reddish. In embodiments, this type of issue may be avoided by designing the optics to use a narrow cone of image light. In other embodiments, it is advantageous to design the optics and position the notch mirror on a curved surface to make the angle of incidence of the image light and see-through light more nearly normal to the surface of the notch mirror. Figure 95a shows an illustration of display optics 9551 that includes a flat combiner 4950 with a notch mirror layer 9552. For simplicity, only rays of image light 9555, 9556 and 9557 are shown from the center of the eyebox. The angle of incidence of the rays of image light 9555, 9556 and 9557 onto the surface of the combiner 4950 varies considerably, from 40 degrees for 9556 to 60 degrees for 9557. In contrast, Figure 95b shows an illustration of display optics 9561 that includes a curved combiner 9560 with a notch mirror layer 9562 applied to the concave side of the curved combiner 9560. Again, for simplicity, only rays of image light 9565, 9567 and 9568 from the center of the eyebox are shown. As can be seen, the rays of image light 9565, 9567 and 9568 all have very similar angles of incidence relative to the surface of the curved combiner 9560. As such, the notch mirror layer 9562 shown in Figure 95b can provide a higher level of performance than the notch mirror layer 9552 shown in Figure 95a. Figure 95 shows a more detailed illustration of display optics 955 (similar to display optics 9561) for a head-worn display comprised of an image source 9525, one or more lenses 9523, a flat partially reflective beam splitter 9520 and a curved combiner 9510. The combiner 9510 includes a notch mirror layer 9512 that can be a multilayer coating, a coextruded film or a nanostructure that provides high reflectivity bands separated by bands of high transmission as has been described previously herein. The display optics 955 provide image light 9535 to an eyebox 9537 for viewing by a user's eye while simultaneously providing the user with a see-through view of the surrounding environment.
As can be seen in Figure 95, having a curved combiner 9510 reduces the variation of the angle of incidence of the rays of the image light 9535 relative to the surface of the curved combiner 9510 and the associated notch mirror layer 9512 within the ray bundles that comprise the display field of view. Variations in the incident angle of the image light 9535 relative to the surface of the curved combiner 9510 can come from variations in the position of the eye in the eyebox and in the cone angle associated with the display field of view. As a result, the incident angle of the image light 9535 rays is substantially uniform at the notch mirror layer 9512. Similarly, Figure 95 shows how the curved surface of the combiner 9510 and associated notch mirror layer 9512 at least partly compensates for changes in the angle of the rays of the see-through light 9530 (shown as dashed lines) at the notch mirror layer 9512. As such, the use of a curved combiner surface where the notch mirror layer is applied reduces color shifts across the display field of view and the see-through field of view, thereby enabling more compact designs of display optics with a wider display field of view. In embodiments, the curved surface of the combiner 9510 where the notch mirror layer 9512 is provided is a spherical curve with the radius of the sphere approximately equal to the distance between the notch mirror layer 9512 and the eyebox 9537 (e.g. the spherical radius is 75% to 120% of the distance), and the user's eye is positioned adjacent to the eyebox 9537, so that the incident angle of the see-through light at the notch mirror layer is essentially identical across the see-through field of view.
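
To illustrate why centering the spherical combiner on the eyebox flattens the angle of incidence, the short sketch below compares the AOI of rays leaving the eyebox center for a flat plate versus a sphere centered on the eye. This is a simplified 2D geometry with an assumed 30 mm combiner distance; none of the numbers come from the patent figures:

```python
import math

R = 30.0  # combiner distance / spherical radius in mm (assumed)

for field_deg in (-20, -10, 0, 10, 20):
    a = math.radians(field_deg)
    # Flat combiner (plane x = R, normal along x): a ray from the eye at
    # field angle a meets the plate at an angle of incidence equal to |a|.
    aoi_flat = abs(field_deg)
    # Spherical combiner of radius R centered on the eyebox: the surface
    # normal at the hit point R*(cos a, sin a) points straight back at the
    # eye, so the angle of incidence is always zero.
    nx, ny = math.cos(a), math.sin(a)            # unit outward normal at hit
    dot = math.cos(a) * nx + math.sin(a) * ny    # ray . normal = 1
    aoi_sphere = math.degrees(math.acos(min(1.0, dot)))
    print(f"field {field_deg:+3d} deg: flat AOI {aoi_flat:2d} deg, "
          f"spherical AOI {aoi_sphere:.1f} deg")
```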

[000402] Figure 96 is an illustration of another example of display optics 965 wherein multiple optical surfaces are internal to a solid block 967 that is comprised of multiple transparent pieces cemented together. As shown in Figure 96, the curved surface of the combiner 9610 that is comprised of the notch mirror layer 9612 is internal to the solid block 967. The multiple transparent pieces can be of the same material, or of different materials that have the same refractive index, so that only reflective surfaces (such as the beam splitter 9620 and the notch mirror layer 9612) affect the light passing through the solid block 967. Alternatively, one or more of the multiple transparent pieces located above the see-through region of the optics (such as field lens 9623) can be made from a different material with a different refractive index to provide a refractive effect on the image light as it passes through. In Figure 96, the top piece is a field lens 9623 that provides a refractive effect to the image light 9635 because it has a different refractive index from the other pieces included in the solid block 967 and because the surfaces of the field lens 9623 are curved. The beam splitter layer 9620 can be a partial mirror coating or partial mirror film that is positioned between two prismatic elements (968 and 969) that have the same refractive index. The combiner 9610 also has the same refractive index as the two prismatic elements (968 and 969), so that image light 9635 and see-through light 9630 pass through the lower portion of the solid block 967 without being exposed to refractive effects, being affected only by the partially reflective surfaces present, including the beam splitter 9620 and the curved surface that is comprised of the notch mirror layer 9612. The front and back surfaces of the solid block 967 are parallel so that the see-through view of the surrounding environment is not distorted. An advantage of using display optics 965 that include a solid block 967 is that the cone angle (also known as the included angle and sometimes referred to in terms of 1/2 the cone angle or CHA as previously described herein) included in the display field of view and the see-through field of view is reduced inside the solid block 967 due to refractive effects as the image light 9635 exits the solid block 967 and as the see-through light 9630 enters the solid block 967. The cone angle reduction of the see-through light 9630 that occurs as the see-through light 9630 enters the solid block 967 at the front (right side as shown in Figure 96) can be seen in Figure 96. The cone angle of both the image light 9635 and the see-through light 9630 then increases due to refraction effects as the light exits the solid block 967 at the back (left side as shown in Figure 96) to provide the display field of view and the see-through field of view. This reduction in the cone angle of the image light and see-through light at the notch mirror layer 9612 reduces the variation in the angle of incidence of both the image light 9635 and the see-through light 9630 at the notch mirror layer 9612, which improves the uniformity of the performance of the notch mirror layer 9612 over the display field of view and the see-through field of view.
Reducing the cone angle of the image light at the notch mirror can be important for providing uniform color across the displayed image in wide field of view display optics, such as when the display field of view is greater than 35 degrees or greater than 40 degrees. Reducing the cone angle of the see-through light at the notch mirror can also be important for providing uniform see-through color when the see-through field of view is above 40 degrees. As such, using a notch mirror layer on a curved surface of a combiner in display optics where the curved surface is internal to a solid block represents a preferred embodiment of the invention.

[000403] To make compact optics for head-worn computers, it is advantageous to use a wide cone of light from the image source. A wide cone of light from the image source is especially important if the optics are to provide the user with a wide field of view, as the wide cone makes it easier for the optics to spread the ray bundles of the image light from the small image source to the larger area of the combiner when providing the wide angular field of image light that makes up the wide field of view. In this way, the optics for head-worn computers are very different from a display such as a television, where a viewer sees the display from a very limited cone of image light.

[000404] Figure 102a shows an illustration of a typical compact optical system 10260 with a folded optical path, wherein light rays are shown passing through the optics from the emissive image source 10230 to the eyebox 10265 where the user can view the image. As shown in Figure 102a, image light is emitted by the image source 10230. The image light is then condensed by the lens 10275 so that a converging field of view is provided to the eyebox after being reflected by the beam splitter 10270. In this example, the angular size of the field of view is ultimately established by the size of the lens 10275 and the optical distance from the lens 10275 to the eyebox 10265. This can be seen by following the diverging rays 10264 from the eyebox 10265 to the beam splitter 10270, where they are folded by reflection from the beam splitter 10270, and then to the lens 10275. The angle between the outermost rays 10264 forms the field of view associated with the displayed image. To make the optical system 10260 lower cost and lightweight, it is advantageous to use a small image source 10230. To make the optical system 10260 compact, it is advantageous to use a folded optical path as shown in Figure 102a, wherein the fold is provided by the beam splitter 10270, although other folded configurations are also possible. Another important factor that enables the optical system 10260 to be compact is the use of a wide cone of image light from the image source 10230, which enables the image source 10230 to be positioned close to the lens 10275. Figures 102b and 102c illustrate how using a short focal length lens in an optical system enables a more compact overall length while also providing a wider field of view to the user's eye. Figure 102b shows a typical thin lens layout with a relatively long focal length and a relatively narrow field of view, wherein the image source 10285 is positioned at the focal length of the lens 10282 and the eye 10280 is positioned approximately at the same distance from the lens as the focal length. The aperture of the lens system is determined by the eyebox 10281. The ray bundles from any point on the image source 10285 provide a cone of light that, as sampled by the lens 10282, will cover the area of the eyebox 10281. With the relatively long focal length lens 10282, the chief rays 10286 and 10287 at the center of each ray bundle are shown as essentially parallel, and as a result the chief rays 10286 and 10287 sampled by the lens 10282 all have a chief ray angle (the angle between the chief ray and the surface normal of the image source) of nearly zero. In contrast, Figure 102c shows a thin lens layout with a reduced length and a wider field of view. This is provided by using a lens 10290 with a shorter focal length. The image source 10285 is again positioned at the focal length of the lens 10290 to provide a sharp image. However, in this case the chief rays 10292 and 10291 are substantially diverging in order to provide the increased field of view to the eyebox 10281 and the user's eye 10280. The field of view is the subtended angle between the rays provided to the eyebox 10281. As such, for a given size of image source 10285, optical systems that provide a wide field of view will be associated with larger chief ray angles as sampled by the lens 10290 to provide the image to the user's eye 10280.

[000405] In a display system for a head-worn computer such as the optical system 10260 shown in Figure 102a, the lens 10275 samples the image light provided by the image source 10230 such that the chief ray angles associated with the ray bundles of image light that are used to form the image seen by the user vary with the radial distance from the center of the image source 10230. Consequently, the chief ray angle is typically zero at the center of the image source 10230 and increases out to the corner of the image source 10230, where it reaches its greatest value. For an optical system 10260 that provides a field of view of 30 degrees or greater, the chief ray angle can be 25 degrees or greater. For an optical system 10260 that provides a field of view of 50 degrees or greater, the chief ray angle can be 40 degrees or greater. The chief ray is the center of the cone of light rays for each pixel in the image, and the subtended angle of the cone of light rays in the ray bundle is determined by the f# of the optical system 10260. As such, the angular distribution of the image light that provides the image to the user at the eyebox 10265 is determined by the chief ray angles and the f# of the optical system 10260. To provide uniform brightness and color to the user over the entire image, the image source 10230 must be capable of providing uniform brightness and color for all of the pixels in the image regardless of the chief ray angle associated with each pixel.
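
As a rough illustration of the relationship described above, a thin-lens approximation gives the chief ray angle as approximately atan(d/fL) for a source point at radial distance d and a lens of focal length fL placed one focal length away. The numbers below (a 6 mm focal length and a source with a corner at roughly 5 mm radius) are hypothetical and not from the patent:

```python
import math

def chief_ray_angle_deg(d_mm: float, focal_length_mm: float) -> float:
    """Thin-lens estimate of chief ray angle at radial distance d from center."""
    return math.degrees(math.atan2(d_mm, focal_length_mm))

# Hypothetical wide-field system: 6 mm focal length lens, ~5 mm corner radius.
for d in (0.0, 1.0, 2.5, 5.0):
    print(f"d = {d:3.1f} mm -> chief ray angle = "
          f"{chief_ray_angle_deg(d, 6.0):4.1f} deg")
```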

[000406] Figure 97 is an illustration of a cross section of an emissive image source 10230, such as an OLED, as it is typically provided. The image source 10230 is comprised of pixels 9705, where each pixel 9705 includes a set of subpixels 9700. For simplicity, in Figure 97 and others (Figures 104, 105, 106 and 107) Pixel 1 is presented as the center pixel on the image source 10230 and Pixel 5 is positioned near the edge of the image source 10230. Each set of subpixels 9700 provides the color set associated with each pixel 9705, such as red, green and blue, or cyan, magenta and yellow, but other configurations of sets of subpixels 9700 are possible, such as including a white subpixel with each pixel 9705. While the subpixels 9700 can be made to directly emit different colors, in many cases it is advantageous for manufacturers of OLED image sources to provide subpixels 9700 comprised of white emitting subpixels 9700 with an associated color filter array 9720 to convert the emitted white light from each subpixel 9700 to the appropriate color for the subpixel 9700. The color filter array 9720 can be separated from the white emitting subpixels 9700 by a transparent layer 9710 that is provided for a variety of reasons, such as to provide a moisture barrier over the pixels 9705. The color filter array 9720 can also be protected by a cover glass (not shown) that is positioned directly over the color filter array 9720. Many of the OLED microdisplays available at this time are made in this way, with white emitting subpixels 9700, a transparent layer 9710 and a color filter array 9720 with a cover glass. This alignment of the color filters 9720 directly over associated subpixels 9700 provides good color rendition across the image when viewed from a position directly above the image source 10230, where the viewing angle is relatively uniform, such as with the optical system shown in Figure 6b. However, when viewed from an angle with a chief ray angle of greater than approximately 20 degrees, such as in the optical system shown in Figure 102c, the color rendition changes and a shift in the color of the image is observed. The reasons for this color shift are explained in more detail below.

[000407] Figures 98, 99 and 100 show illustrations of examples of common layouts for the color filters associated with subpixels on image sources. Figure 98 shows a color filter layout 9810 wherein the colors repeat in rows and the rows are offset from one another by one subpixel. Figure 99 shows a color filter layout 9910 wherein the colors repeat in rows. Figure 100 shows a color filter layout wherein the colors repeat in rows and each row is offset from neighboring rows by 1 ½ subpixels. As shown in Figure 98, a pixel 9815 is comprised of three subpixels 9700 with red, green and blue color filters arranged in a linear pattern. While the pixel 9815 is shown as rectangular with square subpixels 9700 for simplicity, pixels 9815 are typically square with rectangular subpixels 9700.

Similarly, Figure 99 shows pixels 9915 comprised of subpixels 9700 with red, green and blue color filters linearly arranged. Figure 100 shows a different layout wherein a pixel is comprised of subpixels 9700 that include red, green and blue color filters. However, in this case the subpixels 9700 and color filters are arranged in a triangle to make the pixel appear as more of a round spot in the image.

[000408] Figure 101 shows an illustration of rays 10125 of image light as emitted by a single subpixel 9700 in a pixel 9705. The subpixel 9700 emits white light within an angular cone subtended by the rays 10125. The rays 10125 then pass through the transparent layer 9710 and the color filters 9720. However, the angular cone subtended by the rays 10125 is large enough that the rays pass through not only the color filter 9720 associated with the particular subpixel 9700, but also the adjacent color filters 9720 that are associated with adjacent subpixels 9700. Since, as shown in Figures 98, 99 and 100, the adjacent subpixels may have color filters of different colors, the rays 10125 will have different colors depending on which color filter they have passed through. As such, the color produced by a subpixel 9700 varies depending on the angle from which it is viewed above the image source 10230. This effect is responsible for causing a color shift in images displayed in head-worn computers that becomes more noticeable as the chief ray angle increases, such as near the edges or sides of the displayed image.
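
A simple way to see when a ray crosses into a neighboring color filter is to estimate its lateral displacement while crossing the transparent layer. The sketch below assumes a flat layer of thickness t and refractive index n and square subpixels of a given pitch; all numbers are hypothetical and not from the patent:

```python
import math

def lateral_shift_um(t_um: float, n: float, cra_deg: float) -> float:
    """Lateral displacement of a ray crossing the transparent layer (Snell's law)."""
    theta_in = math.radians(cra_deg)
    theta_layer = math.asin(math.sin(theta_in) / n)  # refraction into the layer
    return t_um * math.tan(theta_layer)

subpixel_pitch_um = 4.0   # hypothetical subpixel pitch
t, n = 6.0, 1.5           # hypothetical layer thickness (um) and refractive index

for cra in (0, 15, 30, 45):
    shift = lateral_shift_um(t, n, cra)
    wrong_filter = shift > subpixel_pitch_um / 2  # crosses into the neighbor
    print(f"CRA {cra:2d} deg: shift {shift:4.2f} um -> "
          f"{'adjacent' if wrong_filter else 'intended'} color filter")
```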

[000409] Figure 102 is an illustration of how the ray angles of the image light sampled by a lens in forming an image for display in a typical compact head-worn computer vary across an image source 10230. Image light rays 10240, as sampled by the lens 10235 to form an image for display to a user, have a chief ray angle that varies across the image source 10230. In the center of the image source 10230, the ray 10245 has a zero chief ray angle. In contrast, the ray 10250 at an edge of the image source 10230 has a chief ray angle that is approximately 45 degrees as shown. Figure 103 is an illustration of the chief ray angles sampled by the lens 10235 over the surface of the image source 10230. This illustration shows how the chief ray angle varies based on the radial distance from the center of the image source 10230. Ray 10250 has the largest chief ray angle because the associated pixel 9705 is located adjacent to the corner of the image source 10230, thereby positioning the pixel 9705 radially furthest from the center of the image source 10230. As such, when the chief ray angle for the rays 10240 is considered in combination with the effect described with Figure 101 and the thickness of the transparent layer 9710, ray 10245 would be of the intended color while ray 10250 would be of another color that came from an adjacent color filter 9720. These color differences will be visible in the image that is displayed to the user.

[000410] Figure 104 is an illustration of a cross section of a portion of an image source 10230 wherein Pixel 1 is a center pixel and Pixel 5 is an edge pixel. Rays 10125 are shown emitted as a cone of rays (only half of the cone of rays emitted by each subpixel is shown to simplify the figure) for one subpixel 9700 in each pixel 9705. While each subpixel 9700 emits the same cone of rays 10125, the lens 10235 only samples a small portion of the rays 10125 emitted by each subpixel 9700. The sampled portion of the rays 10125 is different for each subpixel 9700, depending on the chief ray angle associated with the pixel 9705 and the radial position of the pixel 9705 relative to the center of the image source 10230. As a result, while each subpixel 9700 shown with emitted rays 10125 in Figure 104 can be thought of as a red subpixel, because a red color filter is positioned directly over each subpixel 9700, the colors of the sampled rays (shown as dark lines in Figure 104) will progressively shift from red (ray 10445) to green (ray 10450). Consequently, methods are needed to compensate for the color shift encountered at the edges and corners of images displayed in head-worn computers when the optical systems use high chief ray angles.

[000411] Figure 105 shows a modified color filter array 10520 wherein the color filter array 10520 is somewhat larger than the array of subpixels 9700. As a result, the position of the color filters over the subpixels 9700 is progressively outwardly offset for subpixels that are positioned further away from the center of the image source. Figure 106 shows the effect of the progressively offset color filter array 10620. Each of the subpixels 9700 emits the same cone of rays 10125 as shown in Figure 104, but now the rays that are sampled by the lens 10645 (shown as dark lines) all pass through the red color filter in the color filter array 10620, so that each of the subpixels 9700 shown in the same relative position within the pixels 9705 produces the same red color in the image displayed to the user in the optical system 10260. As such, the progressively offset color filter array 10620 effectively compensates for the increasing chief ray angle, thereby enabling compact optical systems with a wide field of view. The progressive offset of the color filter array can be radially based, linearly based with a progressive X direction shift, or rectilinearly based with progressive X direction and Y direction shifts. A radial shift or a rectilinear shift is well suited for a symmetric arrangement of the subpixels and color filters such as is shown in Figure 100. A linearly based shift is well suited for a more rectangular arrangement of subpixels and color filters such as shown in Figure 99, or a version of Figure 98 wherein the pixels are square and the subpixels and color filters are rectangular. This embodiment can be implemented by changing the color filter array pattern that is applied to the image source.
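
The required outward filter offset at any subpixel can be estimated by combining the chief ray angle at that radius with the refraction through the transparent layer. The following toy sketch (function name and all numbers are ours, not from the patent) computes a radially based offset of the kind described above:

```python
import math

def filter_offset_um(x_um, y_um, t_um, n, focal_length_um):
    """Radial outward offset for the filter over a subpixel at (x, y) from center."""
    d = math.hypot(x_um, y_um)
    if d == 0:
        return (0.0, 0.0)
    cra = math.atan2(d, focal_length_um)        # chief ray angle at this radius
    theta = math.asin(math.sin(cra) / n)        # ray angle inside transparent layer
    shift = t_um * math.tan(theta)              # radial displacement to cover
    return (shift * x_um / d, shift * y_um / d) # split into x and y components

# Hypothetical corner subpixel: 4 mm x 3 mm from center, 6 um layer (n = 1.5),
# 6 mm focal length lens.
print(filter_offset_um(4000, 3000, 6.0, 1.5, 6000))  # ~ (2.3 um, 1.7 um)
```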

[000412] Figure 107 shows an illustration of an optical solution wherein the rays from each subpixel 9700 are repointed so that zero angle rays (rays that are emitted perpendicular to the surface of the image source) become rays with the chief ray angle matched to the sampling of the lens. As shown in Figure 107, zero angle rays from all of the subpixels are repointed by an optical film 10760, thereby forming rays with progressively greater chief ray angles in correspondence to what the lens 10235 samples to form the image for the user. As shown in Figure 107, the optical film 10760 is a diffractive lens or a Fresnel lens that progressively refracts the zero angle rays provided by subpixels 9700 and pixels 9705, so that subpixels 9700 and pixels 9705 that are positioned farther from the center of the image source 10230 are refracted more to give them a greater chief ray angle. The optical film 10760 can be attached to the upper surface of the image source 10230 (or attached to a cover glass over the color filter array) to make a compensated image source module, or the optical film 10760 can be retained separately. This embodiment provides a further advantage in that the zero angle rays, which are emitted perpendicular to the surface of the subpixel 9700, include the most intense image light, so that the image provided to the user will be brighter.

[000413] In alternative embodiments, the optical film 10760 can include microlenses to repoint the zero angle rays. The microlenses can be provided as a microlens array in an optical film, or alternatively the microlenses can be applied directly to a cover glass over the color filter array. Microlenses provide a further advantage in that the cone of light emitted by the subpixel can be condensed to utilize more of the light emitted by the subpixel and thereby improve energy efficiency.

[000414] In a further embodiment, the color shift caused by the chief ray angle of the rays sampled by the lens 10235 and the thickness of the transparent layer 9710 is compensated in the digital image by changing the digital code values presented to the pixels in the digital image. In this case, the layout of the color filter array 9720 is also taken into account, so that a digital shift equation is applied to the digital image prior to being displayed on the head-worn display. The digital shift equation includes the position of the subpixel relative to the center of the image source, the chief ray angle for rays sampled by the lens at that position, the thickness of the transparent layer, and the relative position of the color filters adjacent to and surrounding the subpixel. Figure 108 shows an illustration of an array of subpixels on an image source, where 10880 is the center point of the image source and 10882 is a subpixel in the array of subpixels. The arrow indicates the distance from the center 10880 to the subpixel 10882. The digital shift equation thereby determines which pixels will have a color shift caused by the emitted light exiting through an adjacent color filter, and then determines how the code values associated with the pixel need to be shifted between subpixels within the digital image to provide a modified digital image that, when viewed by the user in the head-worn computer, will have the colors intended to be included in the digital image. The distance of the subpixel 10882 from the center of the image source 10880 and the lens characteristics (e.g. focal length) determine the chief ray angle for the pixel, which, along with the thickness of the transparent layer 9710, determines whether the sampled ray from the subpixel 10882 will exit through the intended color filter as shown in Figure 104 as ray 10445, or whether the sampled ray will exit through an adjacent color filter as shown by ray 10450. To compensate for the sampled rays exiting through adjacent color filters, the code values for the pixel in the digital image are shifted in the opposite direction (e.g. toward the center of the image) to an adjacent subpixel. Each code value associated with a pixel determines how brightly each subpixel in the set of subpixels will emit white light, and consequently how bright each color associated with the pixel in the image will be. As such, in this method the relationship between the subpixels and the colors produced by the subpixels in the displayed image is changed to take into account the effect of the lens and the distribution of ray angles used by the lens to provide the displayed image within the head-worn computer. When shifting the code values to an adjacent subpixel, the adjacent subpixel may be in the same pixel or in an adjacent pixel. Equation 1 is an example of a digital shift equation for a subpixel, wherein: Ps is the number of subpixels that the code value is to be laterally shifted by; d is the distance from the center of the image source to the position of the subpixel; t is the thickness of the transparent layer; fL is the focal length of the lens; C is a function of the color filter array layout surrounding the subpixel; and f(a) is a function of the chief ray angle relative to the color filter array layout. Equation 1 is shown as an example of a digital shift equation, but other equations are possible.

[000415] Ps = (d × t / fL) × (C × f(a))     (Equation 1)

[000416] For example, as shown in Figure 104, the code values for pixels 1 and 2 will not be shifted because the sampled rays from each subpixel (sampled rays are shown as dark lines) exit through their intended color filters, as shown by ray 10445 exiting through a red color filter. In contrast, the code values for pixels 4 and 5 will be shifted by one subpixel because the sampled rays exit through the adjacent color filter, as shown by ray 10450, which exits through a green color filter. As such, the code values for all the subpixels in pixels 4 and 5 will be shifted to the left by one subpixel so that the emitted light will exit through the intended color filter. For cases such as shown by ray 10452, where the sampled ray from a subpixel in pixel 3 exits partially through a red color filter and partially through a green color filter, the code values for the pixel 3 subpixels can be shared between subpixels, for example by averaging the code values between the two subpixels and thereby providing a ½ subpixel shift. Alternatively, code value shifts can be limited to whole subpixels, with the code value shift only applied when the majority of the sampled ray associated with the subpixel will exit through the adjacent color filter.
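
A schematic sketch of applying a digital shift equation of the form of Equation 1, including the half-subpixel averaging rule described above, is shown below. The helper names, code values and dimensions are hypothetical, not the patent's actual implementation:

```python
def subpixel_shift(d_um, t_um, focal_length_um, C=1.0, f_a=1.0):
    """Ps = (d x t / fL) x (C x f(a)) in units of subpixels (Equation 1)."""
    return (d_um * t_um / focal_length_um) * (C * f_a)

def shift_code_values(row, ps):
    """Shift a row of subpixel code values toward the image center by ps subpixels.

    Whole shifts move code values to the adjacent subpixel; a fractional part
    near 1/2 is handled by averaging between the two neighboring subpixels,
    giving the half-subpixel shift described for ray 10452.
    """
    whole, frac = int(ps), ps - int(ps)
    shifted = row[whole:] + [0] * whole           # shift left by whole subpixels
    if 0.25 < frac < 0.75:                        # approximately half-subpixel case
        shifted = [(a + b) / 2 for a, b in zip(shifted, shifted[1:] + [0])]
    return shifted

row = [200, 180, 160, 140, 120, 100]              # hypothetical code values
ps = subpixel_shift(d_um=1000, t_um=6.0, focal_length_um=6000)  # -> 1.0 subpixel
print(shift_code_values(row, ps))                 # [180, 160, 140, 120, 100, 0]
```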

[000417] Looking at the color filter array patterns shown in Figures 98, 99 and 100, code value shifts between subpixels will vary depending on the color filter array pattern. For the color filter array pattern shown in Figure 99, code value shifts between subpixels are only provided to reduce color shifts in the horizontal direction. Since the color filter array includes vertical stripes of the same color, increasing chief ray angle will only cause a color shift in the horizontal direction and not in the vertical direction. However, for the color filter array patterns shown in Figures 98 and 100, code shifting between subpixels will be needed in both X and Y directions as the chief ray angle increases toward the corners of the image. For example, for the color filter array shown in Figure 98, for a pixel positioned horizontally from the center of the image source with a chief ray angle that causes the emitted light from a subpixel to go into the adjacent color filter, red code values will need to be shifted left into the subpixel under the blue color filter. Similarly, the green code values will need to be shifted left into the subpixel under the red color filter, and likewise the blue code values will need to be shifted left to the subpixel under the green color filter. For a pixel located in the top right corner of the image source, code values will need to be shifted to the left and down so that the light rays emitted by the subpixel and sampled by the lens exit through the intended color filter.

[000418] It should be noted that all of the methods described, including color filter shifts, ray repointing and digital pixel shifts, will produce an image that, when viewed from a position directly above the image source such as when viewed by eye, will actually have poor color rendition. This is because when viewed in this manner the chief ray angles will all be close to zero degrees. It is only when the image is viewed through a lens with a relatively short focal length, so that chief rays with substantial chief ray angles are sampled by the lens, that the color rendition will be improved by these methods. As such, the methods would not be useful on a television type display or in a display system that uses telecentric image light. The color shift that is the topic of this invention is only important when compact optics with a short focal length lens relative to the size of the image source are used, such as to provide a head-worn computer with a wide display field of view with compact optical systems.

[000419] Users of smartphones often complain of neck pain caused by the repetitive stress of looking downward at the smartphone while texting, emailing or internet surfing. An illustration of a user 10910 using a smartphone 10915, wherein the user is looking at the smartphone with an extreme downward line of sight 10930, is shown in Figure 109. The neck pain condition has been given a variety of descriptive names, including "text neck," "iPhone neck," "smartphone neck," etc. While the issue can be reduced if the user holds the smartphone directly in front, this is typically not comfortable for the user's arms. The problem is that the user must choose between an uncomfortable neck when looking down at the smartphone and uncomfortable arms when holding the smartphone up. A head-worn computer (HWC) is not limited in these ways because the HWC is mounted on the user's head. In addition, the user interface for an HWC can be held separately in the user's hands (e.g. a separate keyboard). As a result, an HWC can be used in more ergonomic positions than a smartphone.

[000420] The disclosure provides a method of operation of an HWC that promotes an ergonomic positioning of the user's head when viewing images on the HWC. To avoid interfering with augmented reality uses when the user is moving through an environment, the user's activities are monitored to determine the type of activity that the user is engaged in. When the user is determined to be engaged in an activity that includes limited motion, where an ergonomic position would be beneficial, the position of the image in the HWC is modified so that the image content is only fully viewable when the user's head is at an ergonomically advantaged position.

[000421] Neck pain caused by the prolonged use of a smartphone 10915 is connected with the user looking downwards, thereby tilting their head downwards, as shown in Figure 109, for extended periods of time. In this posture, the user's line of sight 10930 is approximately 45 degrees or more below horizontal. In contrast, when using an HWC, the image presented by the HWC can be viewed at any angle the user chooses to position their head. Figure 110 is an illustration of a user 10910 looking at an image in an HWC 11020, wherein the user's head is positioned neutrally so the user 10910 looks straight forward or slightly downward when viewing images. Figure 111 shows the relative position of the user's line of sight 11137 to the center of the virtual image, which is preferably approximately 15 degrees below horizontal. The disclosure provides a method of operation that limits viewing of images with full image content to head angles that are ergonomic.

[000422] HWCs are used in a variety of use cases, including use cases where the user moves about in a surrounding environment, such as by walking or running, in which case head position is determined at least in part by the surrounding environment. In this case, neck pain is not as much of an issue, and limiting the viewing of images to ergonomic head positions would actually interfere with functional aspects of the use case. Similarly, in certain types of augmented reality use cases such as gaming, where the user may not be moving in the environment but the use case calls for the user to move their head around frequently, neck pain is not as much of an issue and limiting the viewing of images to ergonomic head angles would interfere with functional aspects of the use case. However, in those use cases where the user keeps their head relatively still while viewing images for extended periods of time, such as when watching a movie or while reading email, limiting the viewing of images to head angles that are ergonomic provides a significant advantage in user comfort.

[000423] To be able to differentiate between use cases wherein an ergonomic head position is an advantage and use cases where an ergonomic head position would interfere with functional aspects of the use case, the HWC needs to be able to identify what the user is doing. In embodiments, this determination can be done by a combination of sensors that determine how the user is moving. In embodiments, a marker can also be associated with the use case that identifies the use case as one that needs the image to be locked in position within the display field of view of the HWC, or as one that would provide improved user comfort if the viewing were limited to ergonomic head positions.

[000424] Determining what the user is doing relative to whether ergonomic head positioning should be provided involves determining whether the user is moving about in the environment and whether the user is substantially moving their head. The HWC can include various sensors that can be used to determine what the user is doing. The sensors may include an inertial measurement unit (IMU), which can include gyroscopes and accelerometers to measure movement and tilt, a magnetometer to measure compass heading, a global navigation satellite sensor (GNSS or GPS) to measure position on the earth, and an altimeter to measure altitude. Polling of the IMU and GPS can be used to determine whether the user is moving about in the environment and at what speed the user is moving. Slow speed (e.g. below 20 miles per hour) as indicated by the GPS, along with cyclic up and down measurements from the IMU, can be used to determine whether the user is walking or running. Measurements from the GPS and altimeter can be used to determine whether the user is in a vehicle. High speed movement (e.g. above 20 miles per hour) at a low altitude can indicate the user is in a vehicle such as a car. High speed movement wherein the compass heading is largely the same as the GPS direction of movement, aside from some periodic differences to the left and right, can indicate that the user is driving a vehicle. High speed movement wherein the compass heading is different from the GPS direction of movement can indicate that the user is a passenger in a vehicle, and if the tilt is downward, the user can be determined to be reading. Very high speed movement (e.g. above 150 miles per hour) at a high altitude can indicate that the user is flying in a plane. Rapid head movements back and forth can indicate that the user is looking for something. If the head movements correspond to changes in the displayed image, the user can be determined to be reacting to augmented reality imagery. Thus, movement signatures corresponding to combinations of different types of measurement patterns from sensors in the HWC can be used to determine what the user is doing.
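
An illustrative sketch of such a movement-signature classifier is shown below. The speed and altitude thresholds are taken from the passage above, but the data structure, field names and altitude cutoff are our own assumptions rather than anything specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class SensorSample:
    speed_mph: float             # from GPS
    altitude_ft: float           # from altimeter
    heading_matches_track: bool  # compass heading ~ GPS direction of travel
    cyclic_bounce: bool          # cyclic up/down signal from the IMU
    tilt_down: bool              # head tilted downward (IMU)

def classify_activity(s: SensorSample) -> str:
    if s.speed_mph > 150 and s.altitude_ft > 10_000:   # altitude cutoff assumed
        return "flying"                    # very high speed at high altitude
    if s.speed_mph > 20:
        if s.heading_matches_track:
            return "driving"               # heading tracks the GPS movement
        return "passenger-reading" if s.tilt_down else "passenger"
    if s.cyclic_bounce:
        return "walking-or-running"        # slow speed with cyclic IMU motion
    return "stationary"

# Ergonomic image positioning would apply to stationary users and passengers.
print(classify_activity(SensorSample(0.0, 300, False, False, True)))  # stationary
```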

[000425] In cases where the user is determined to be stationary or to be a passenger in a vehicle or a plane, the HWC can automatically switch to a mode of operation in which the image content is only fully viewable when the HWC is oriented within a range of degrees from horizontal (e.g. -10 to -20 degrees from horizontal) that is considered ergonomically advantaged. The HWC is preferably set up to follow the ergonomic recommendations for a workspace that includes a computer monitor. In the ergonomic recommendations for computer workstations, the upper edge of the monitor should be positioned level with the user's eye and the middle of the monitor should be viewable with a line of sight 15 degrees below horizontal. Figure 111 is an illustration showing a user 10910 with an HWC 11020 positioned to provide a line of sight 11137 to the center of a virtual image 11140 that is approximately 15 degrees below horizontal, while the line of sight to the top of the virtual image 11135 is approximately horizontal. Figure 112 shows an example of a virtual image 11140 as seen by the user 10910 when the user's head is positioned as shown in Figures 110 and 111, where the user is presented with a full view of the virtual image 11140 including all the image content.

[000426] In contrast, Figure 113 shows an example in which the user is looking downward at approximately 30 degrees. In this case, the disclosure provides a modified virtual image 11440 comprised of less than half of the image content included in virtual image 11140, with that image content presented in the upper portion of the modified virtual image 11440 as shown in Figure 114. In this way the user is encouraged to view images with his head held at an ergonomic angle, where the user can view virtual image 11140 including all of the image content. Larger display fields of view may increase the difference between the angle of the line of sight to the top of the virtual image and the angle of the line of sight to the center of the virtual image.

[000427] In a more general case, a portion of the image content 11445 included in the virtual image 11140 is presented in the modified virtual image 11440 in correspondence to the deviation between a target ergonomic angle for the line of sight 11137 to the center of the image and the actual angle of the line of sight to the center of the image. The larger the deviation, the smaller the portion of the image content 11445 that is presented within the modified virtual image 11440. By moving the portion of the image content 11445 upwards in the modified virtual image 11440 as the user moves his head downwards, the image content appears to be locked in place vertically relative to the environment. At the same time, in the method of the disclosure the image content does not move laterally within the modified virtual image 11440 if the user moves his head laterally, because this type of movement does not affect ergonomics when using an HWC 11020. As such, the method provided by the disclosure differs from world locking of a virtual image to objects in the environment, because world locking typically includes locking of the virtual image both vertically and laterally relative to objects in the surrounding environment in response to movements of the user's head. Since the user of an HWC 11020 can choose the lateral positioning of his head, he will tend to choose a comfortable and ergonomic lateral position, which is approximately looking straight ahead. This also allows the user of an HWC 11020 that provides a see-through view of the surrounding environment to freely move his head laterally in response to changes in the surrounding environment, while being encouraged to hold his head in an ergonomic position vertically.

[000428] In embodiments, the target angle for the line of sight 11137 to the center of the virtual image 11140 is in the range of 10 to 20 degrees below horizontal. In addition, the virtual image 11140 is presented by the HWC 11020 so that it is perpendicular to the line of sight 11137, so that the angles of the lines of sight to the top and bottom of the virtual image 11140 are substantially the same, thereby making it easier for the user 10910 to view the entire image. This is different from a computer workstation, wherein the display is typically oriented vertically to reduce the space required by the workstation. Instead, the HWC 11020 provides a virtual image 11140 that is presented more like a laptop display, which is tilted to orient the display perpendicular to the user's line of sight.

[000429] The angle of the line of sight 11137 to the center of the virtual image can be determined from the geometry of the optics of the HWC 11020 (how the virtual image is presented relative to the frame of the HWC 11020) and a measurement of the angle of the HWC 11020 provided by the tilt of the IMU. The modified virtual image 11440 can be constructed by cropping the image content, shifting the cropped digital image content 11445 vertically within the modified virtual image 11440, and then adding solid digital image content 11447 to form the modified virtual image 11440. The solid digital image content 11447 is comprised of a solid color, such as black or white, as best matches the current use case of the HWC 11020. Typically, for an HWC 11020 that provides a see-through view of the surrounding environment, the solid digital image content 11447 would be black, as shown in Figure 114, to provide the user with a clear view of the surrounding environment through the portion of the modified virtual image 11440 that constitutes the solid digital image content 11447. The black portions of the modified virtual image 11440 provide no light to the user's eye, and as such the user sees a clear see-through view of the surrounding environment.
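
A simplified sketch of this crop-shift-fill construction is shown below. The target angle, cutoff deviation, visible-fraction mapping and frame dimensions are all assumptions made for illustration, not values specified by the patent:

```python
import numpy as np

def modify_virtual_image(image: np.ndarray, head_angle_deg: float,
                         target_deg: float = -15.0, cutoff_deg: float = 20.0):
    """Return the image cropped per the deviation and pinned to the top of the frame."""
    deviation = abs(head_angle_deg - target_deg)
    visible = max(0.0, 1.0 - deviation / cutoff_deg)   # fraction still shown
    rows = int(image.shape[0] * visible)
    out = np.zeros_like(image)                         # black fill = see-through
    out[:rows] = image[:rows]                          # content pinned to the top
    return out

frame = np.full((480, 640, 3), 128, dtype=np.uint8)    # hypothetical gray frame
# Looking down at -30 deg vs. a -15 deg target leaves ~25% of the content visible.
modified = modify_virtual_image(frame, head_angle_deg=-30.0)
print(modified.nonzero()[0].max() + 1)                 # 120 of 480 rows remain
```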

[000430] In embodiments, if the HWC 11020 is determined to be at an extreme angle relative to horizontal (e.g. greater than 30 degrees from horizontal) as measured by the IMU, in either a vertical or sideways direction, the user is determined to be lying down. If the user is lying down, the user's head is typically supported by a pillow or other padding, so that the head position does not strain the neck muscles and neck pain is not an issue. Consequently, the virtual image 11140 is presented with full image content (no cropping or moving) to the user.

[000431] In embodiments, a camera is included in the HWC 11020 that can capture an image of a portion of the user's body, such as the user's chest, to determine the angle of the HWC 11020 relative to the user's body. The method of presenting the virtual image to encourage an ergonomic position of the user's head while viewing images for an extended period of time is then changed in correspondence to the measured angle of the HWC 11020 relative to the user's body. This embodiment can provide a more accurate measure of the angle of the user's head position relative to the user's body in cases where the user's body is angled, such as when the user is sitting in a chair and leaning back, or when the user is leaning forward or backward.

[000432] When using head-worn computers for viewing different types of content, such as movies, it is sometimes desirable to share the experience associated with the content between multiple users. Typically, the sharing occurs between multiple users that are adjacent to one another so the emotions and reactions caused by the content can be shared, similar to the shared experience provided by watching television as a group. In this case, the sharing experience provided by the head-worn computers is improved if the content is presented simultaneously to the multiple users in the group so they experience the content at the same time. However, synchronized presentation of content does not necessarily occur between multiple head-worn computers. As a result, a system is needed to provide the same content to the multiple users and then to control the timing of the start and running of the content to ensure that the content is displayed to the multiple users continuously in a synchronized manner.

[000433] The disclosure provides a method for making content available to multiple users and then simultaneously starting the content for the multiple users. The users are all part of a defined group and are at least initially connected to the same content access point. The access point can be the cloud, one of the head-worn computers in the group, etc.

[000434] A method is provided for identifying the users that are to be included in the group of multiple users that will be provided with synchronized content, based on selections by each user of the same content and an indication that the user wants to receive the content in a synchronized manner with the other users in the group. The access point then initiates a simultaneous start of the content within the head-worn computers for all of the users included in the group.

[000435] In embodiments, in the event that the status of the users changes so that they can no longer connect to the access point, one of the head-worn computers in the group may become the host. The other head-worn computers in the group then become clients. The control of the synchronized content is then provided by the host so that the content continues to be provided to the multiple users in a synchronized manner. The steps included in the method can be controlled by an app that is downloaded to the multiple head-worn computers.

[000436] Head-worn computers can be used to access and display digital content provided by the cloud through available access points, where content can include, for example, movies, games or interactive experiences with imagery and haptic stimulation. In some cases, where multiple users with head-worn computers are located adjacent to one another (e.g. in the same room), it can be desirable to share the experience and reactions caused by the content between the multiple users. The sharing experience is improved when the content is provided to the multiple users simultaneously. However, head-worn computers are typically set up to provide content to individual users and are not set up to provide simultaneous presentation of content to multiple users. Therefore, a system is needed to communicate between the multiple head-worn computers in such a way that the same content can be provided simultaneously in a synchronized fashion.

[000437] To enable simultaneous presentation of content to multiple users in a group, the head-worn computers need to be connected to the internet with a connection that provides sufficient bandwidth for the content to be accessed through a combination of streaming and buffering. The head-worn computers also require a display system to display the content to the user. The display systems associated with the head-worn computers do not all have to be the same for the various users in the group, and the display systems can include a view of the surrounding environment or not (e.g. the display systems can provide augmented reality images or virtual reality images). Audio systems are also required if the content includes an audio track. A GPS system in the head-worn computer may be needed to verify the location of each of the users in the group.

Electronic compasses in the head-worn computers can be used to determine the direction that each user is looking. Wifi connections between the head-worn computers in the group can enable the head-worn computers in the group to communicate with each other.

[000438] Figure 115 is an illustration of multiple head-worn computers linked to the cloud or a host. The multiple head-worn computers 11510, 11520 and 11530 can be different types of head-worn computers. The multiple head-worn computers 11510, 11520 and 11530 are linked through wifi or the internet to an access point 11550, which can be the cloud, another computer or a host head-worn computer.

[000439] A central feature of a system that allows multiple users to simultaneously share content is identifying which users are to be included in the group. In embodiments, to join the group, each user selects the same content from an access point on the cloud. Each of the users also indicates that they would like to be included in a synchronized group presentation of the content. To enhance security, the access point on the cloud can verify that each of the users requesting to be included in the group is located at the same or a very similar GPS location (e.g. within 5 meters) prior to downloading the content to the users or prior to initiating a start of the content presentation. In embodiments, the group members may be separated by greater distance and not even be within view of one another. For example, the head-worn computers may be adapted with a separate audio channel to allow each of the members to talk to one another and hear each other throughout the experience, even if the participants are greatly separated.

[000440] To further enhance security, potential members can be verified to have looked at another member of the group for a period of time prior to being added to the group in a linking process. This is accomplished by the cloud verifying that all the members of the group have, at some time during a predetermined period of time, shared the same GPS location and had opposite compass headings, thereby indicating that two members of the group are adjacent and looking at one another. A linking period can be defined as well, wherein the members need to look at each other during a defined period of time and for a selected period of time. For example, the linking process can be as follows: the users wishing to join the group can be required to look at another member of the group for a period of 5 seconds within a defined period of 5 minutes before they will be admitted to the group. The linking process can further be defined such that one member of the group is identified as the link coordinator and each member of the group must look at the link coordinator for 5 seconds before being admitted to the group, with the process repeated for each member of the group. During the linking process, the access point or the head-worn computer associated with the link coordinator polls the GPS locations and compass headings of the head-worn computers associated with the users that have indicated they would like to be included in the synchronized presentation of the content, to determine whether the multiple users have looked at another member or the link coordinator for the selected period of time.
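
An illustrative sketch of this mutual-gaze verification is shown below. The 5 second dwell and 5 minute window come from the example above; the data model, co-location approximation and heading tolerance are our own assumptions:

```python
import math
from dataclasses import dataclass

@dataclass
class Poll:
    t: float            # seconds since the linking window opened
    lat: float          # degrees
    lon: float          # degrees
    heading_deg: float  # compass heading

def close_enough(p: Poll, q: Poll, max_m: float = 5.0) -> bool:
    """Approximate GPS co-location check (flat-earth approximation, small distances)."""
    dlat = (p.lat - q.lat) * 111_000.0
    dlon = (p.lon - q.lon) * 111_000.0 * math.cos(math.radians(p.lat))
    return math.hypot(dlat, dlon) <= max_m

def roughly_opposite(h1: float, h2: float, tol_deg: float = 20.0) -> bool:
    """True if two compass headings differ by approximately 180 degrees."""
    return abs(((h1 - h2) % 360.0) - 180.0) <= tol_deg

def verified_link(candidate, coordinator, dwell_s=5.0, window_s=300.0) -> bool:
    """True if the candidate faced the coordinator for dwell_s within the window."""
    run_start = None
    for c, k in zip(candidate, coordinator):     # assumes synchronized polls
        ok = (c.t <= window_s and close_enough(c, k)
              and roughly_opposite(c.heading_deg, k.heading_deg))
        if ok:
            run_start = c.t if run_start is None else run_start
            if c.t - run_start >= dwell_s:
                return True
        else:
            run_start = None
    return False

polls_a = [Poll(t, 37.0, -122.0, 90.0) for t in range(7)]   # candidate faces east
polls_b = [Poll(t, 37.0, -122.0, 270.0) for t in range(7)]  # coordinator faces west
print(verified_link(polls_a, polls_b))  # True after 5 s of mutual gaze
```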

[000441] In embodiments, the content is downloaded to one of the head-worn computers and that head-worn computer then acts as a host for the other head-worn computers associated with the members of the group. The process for identifying the users to be included in the group can then be the same, but the steps are now performed by the host head-worn computer. In the event that simultaneous sharing is initially set up to be run by an access point in the cloud and the linkage to the cloud is lost, such as could occur if the group is on a train, one of the members is identified as the host and the others as clients for the remainder of the simultaneous sharing session. The switch between cloud based control and host based control can occur automatically, or the content sharing can be interrupted to allow one of the members of the group to volunteer to be the host. In the case where the switch between cloud based and host based control occurs automatically, the choice of which user to designate as the host can be made by comparing the hardware capabilities of the head-worn computers (e.g. which head-worn computer has the most wifi bandwidth) or by identifying which head-worn computer has the highest % of buffering of the content.

[000442] Once the members of the group have been identified and linked to the group, the presentation of the content can be started immediately if the content is to be streamed, or delayed if the content is to be buffered. To reduce the chance of interruptions during the presentation of the content, the start can be delayed until a selected % of the content has been downloaded (e.g. 30% of the content is downloaded). The selected % can be automatically chosen in correspondence to the bandwidth that is available for downloading the content. The start can be delayed until the selected % of the download has occurred for all the members or for the host, as applicable. Alternatively, the start of the content can be delayed until a start request has been received from one of the members of the group. In any case, a start signal is simultaneously transmitted to all the members of the group by the access point or host so the start of the content is synchronized.

[000443] After the content has been downloaded or buffered to the selected percentage, the linking of the members has been accomplished, and the start has been initiated, periodic synchronization checks are performed to ensure that the content remains synchronized. The synchronization check can be done by providing timestamps associated with the content. The cloud or host can then poll the multiple head-worn computers to determine whether the timestamps occur at the same time, thereby indicating a synchronized presentation of the content to the members of the group. If the content is found to not be synchronized, portions of the content can be repeated or removed within individual head-worn computers to improve the synchronization. For example, when sharing a video, if the video is found to not be synchronized between the members, individual frames in the video can be repeated or removed to improve the synchronization without significantly detracting from the viewing experience.
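
A hedged sketch of the periodic synchronization check: the access point or host polls each member for the presentation timestamp of its content and asks drifted members to repeat or drop frames. The function names, the use of the median member as the reference, and the 50 ms drift threshold are illustrative assumptions, not values from the disclosure:

```python
MAX_DRIFT_S = 0.050  # drift beyond ~50 ms triggers a correction (assumed)

def synchronization_check(members, poll_timestamp, repeat_frames, drop_frames,
                          frame_period_s=1.0 / 30.0):
    """poll_timestamp(m) -> presentation timestamp (s) reported by member m;
    repeat_frames(m, n) / drop_frames(m, n) adjust playback on member m."""
    stamps = {m: poll_timestamp(m) for m in members}
    reference = sorted(stamps.values())[len(stamps) // 2]  # median member as reference
    for m, t in stamps.items():
        drift = t - reference
        if abs(drift) <= MAX_DRIFT_S:
            continue  # close enough; no correction needed
        frames = max(1, round(abs(drift) / frame_period_s))
        if drift > 0:
            repeat_frames(m, frames)  # member is ahead: repeat frames so it waits
        else:
            drop_frames(m, frames)    # member is behind: drop frames to catch up
```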

[000444] In embodiments, the system for simultaneous sharing of content can be included in an app that is loaded onto the head-worn displays. The app can communicate with the cloud and the other head-worn computers associated with the members of the group to download the content, establish the group and initiate the start of the content. The app can also control any switches from cloud-based control to host-based control.

[000445] In embodiments, the location of the users to be included in the group is verified to be the same based on the IP address associated with the head-worn displays.

[000446] In embodiments, if the head-worn computers associated with the multiple users cannot access the cloud, synchronized sharing of content can still be accomplished wherein one of the head-worn computers that has the content is identified as the host. The host then identifies the members of the group, establishes a link (e.g. a Wi-Fi link) with the members of the group, and the members download the content from the host. The host can determine whether other head-worn computers are physically adjacent by whether they can be linked to through Wi-Fi, or whether they have the same IP address or the same GPS location. A more secure determination of whether a head-worn computer should be included in the group can include the step of determining whether the head-worn computer has the same GPS location and has had an opposite compass heading to the host within a predetermined period of time, thereby indicating that the client and the host have looked at one another. When the selected percentage of buffering of the content by all the members of the group has been accomplished, the host initiates the start of the content. As the content is being shared by the members of the group, the host conducts periodic synchronization checks as previously described herein.
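
A sketch of the host-only admission test described above, reusing same_location() and mutual_look_verified() from the earlier sketch; the record fields and function names are illustrative assumptions:

```python
def admit_to_group(host, candidate, strict: bool = False) -> bool:
    """With no cloud link, the host admits a candidate that is reachable over
    Wi-Fi, or that shares the host's public IP address or GPS location."""
    adjacent = (
        candidate.reachable_over_wifi
        or candidate.public_ip == host.public_ip
        or same_location(host.lat, host.lon, candidate.lat, candidate.lon)
    )
    if not adjacent:
        return False
    if strict:
        # Stricter check: candidate and host were co-located with opposite
        # compass headings within the predetermined period of time.
        return mutual_look_verified(host.samples, candidate.samples)
    return True
```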

[000447] Additional Statements of the Disclosure

[000448] In some implementations, frames mechanically supporting head-worn computers may be described in the following clauses or otherwise described herein and as illustrated in Fig. 57.

[000449] CLAUSE SET A

[000450] Clause 1. A head-worn computer, comprising: a frame mechanically adapted to position a see-through optical computer display in front of a user's eye, wherein the frame includes an arm adapted to hold the frame to the user's head; a rotary dial mounted on the arm and accessible to the user; a direction selection device mounted on the arm proximate the rotary dial; and the rotary dial adapted to move a graphical selection element either laterally or vertically within a graphical user interface, wherein the movement direction is based on a selection setting of the selection device.

[000451] Clause 2. The head-worn computer of clause 1, wherein the rotary dial includes a plurality of mechanical stops adapted to cause the rotary dial to stop turning in increments to correspondingly cause the graphical selection element to pause on a next selectable object in the graphical user interface.

[000452] Clause 3. The head-worn computer of clause 1, wherein the direction selection device causes the graphical selection element to move vertically when activated a first time and horizontally when activated a second time.

[000453] Clause 4. The head-worn computer of clause 1, wherein the graphical user interface includes a plurality of selectable elements, wherein the graphical selection element snaps to a next selectable element in the plurality of selectable elements, and wherein the direction of the next selectable element is based on a selection setting of the direction selection device.

[000454] Clause 5. The head-worn computer of clause 1, wherein the graphical user interface includes a continuously scrollable environment, wherein the direction of the scroll in the graphical user interface is dependent on a selection setting of the direction selection device.

[000455] Clause 6. The head-worn computer of clause 1, further comprising a haptic feedback system adapted to provide a level of haptic feedback corresponding to a turn of the rotary dial.

[000456] Clause 7. The head-worn computer of clause 6, wherein the haptic feedback is provided by a system that includes a plurality of haptic strips.

[000457] Clause 8. The head-worn computer of clause 6, wherein the level of haptic feedback is variable based on an interaction with the rotary dial.

[000458] Clause 9. The head-worn computer of clause 6, wherein the level of haptic feedback is variable based on an interaction with the direction selection device.

[000459] Clause 10. The head-worn computer of clause 6, wherein the level of haptic feedback is variable based on a direction of turn of the rotary dial.

[000460] Clause 11. The head-worn computer of clause 1, further comprising a haptic feedback system adapted to provide a level of haptic feedback corresponding to an activation of the direction selection device.

[000461] Clause 12. The head-worn computer of clause 11, wherein the haptic feedback is provided by a system that includes a plurality of haptic strips.

[000462] Clause 13. The head-worn computer of clause 11, wherein the level of haptic feedback is variable based on an interaction with the rotary dial.

[000463] Clause 14. The head-worn computer of clause 11, wherein the level of haptic feedback is variable based on an interaction with the direction selection device.

[000464] Clause 15. The head-worn computer of clause 11, wherein the level of haptic feedback is variable based on a direction of turn of the rotary dial.

[000465] Clause 16. The head-worn computer of clause 1, wherein the rotary dial is adapted to accept a press towards the center of the dial as a selection of an item.

[000466] Clause 17. The head-worn computer of clause 1, wherein the direction selection device is adapted to accept a type of interaction that causes the selection of an item in the graphical user interface.

[000467] Clause 18. The head-worn computer of clause 17, wherein the type of interaction is a length of selection time.

[000468] Clause 19. The head-worn computer of clause 17, wherein the type of interaction is a pattern of user interactions.

[000469] Clause 20. The head-worn computer of clause 19, wherein the pattern includes a plurality of quick activations.

[000470] Clause 21. A head-worn computer, comprising: a frame supporting a computer display, wherein the frame is adapted to position the computer display in front of an eye of a user when the frame is mounted on the head of the user; a physical user interface mounted on the frame, the physical user interface adapted to control an aspect of a computer generated environment presented in the computer display; a proximity detection system adapted to detect a user's finger proximity with the frame relative to the physical user interface; and a processor adapted to generate a representation of at least a portion of the frame and an indication of the proximity.

[000471] Clause 22. The head-worn computer of clause 21, wherein the frame includes an arm to secure the frame to the head of the user, and wherein the physical user interface is mounted on the arm.

[000472] Clause 23. The head-worn computer of clause 21, wherein the physical user interface has multiple input systems.

[000473] Clause 24. The head-worn computer of clause 21, wherein the proximity detection system includes a capacitive sensor in close proximity with the physical user interface.

[000474] Clause 25. The head-worn computer of clause 21, wherein the proximity detection system includes a plurality of proximity detectors physically separated on the frame such that the representation is based on the plurality of proximity detectors.

[000475] Clause 26. The head-worn computer of clause 21, further comprising a haptic feedback system mounted on the frame, wherein the haptic feedback system provides a haptic feedback to the user based on the proximity.

[000476] Clause 27. The head-worn computer of clause 26, wherein the haptic feedback system provides a variable intensity haptic feedback depending on the proximity detection system's prediction of the user's interaction getting closer to the user interface.

[000477] In some implementations, frames mechanically supporting head-worn computers may be described in the following clauses or otherwise described herein and as illustrated in Fig. 57.

[000478] CLAUSE SET B

[000479] Clause 1. A head-worn computer, comprising: a frame supporting a computer display, wherein the frame is adapted to position the computer display in front of an eye of a user when the frame is mounted on the head of the user; a physical user interface mounted on the frame, the physical user interface adapted to control an aspect of a computer generated environment presented in the computer display; a proximity detection system adapted to detect a user's finger proximity with the frame relative to the physical user interface; and a processor adapted to generate a representation of at least a portion of the frame and an indication of the proximity.

[000480] Clause 2. The head-worn computer of clause 1, wherein the frame includes an arm to secure the frame to the head of the user, and wherein the physical user interface is mounted on the arm.

[000481] Clause 3. The head-worn computer of clause 1, wherein the physical user interface has multiple input systems.

[000482] Clause 4. The head-worn computer of clause 1, wherein the proximity detection system includes a capacitive sensor in close proximity with the physical user interface.

[000483] Clause 5. The head-worn computer of clause 1, wherein the proximity detection system includes a plurality of proximity detectors physically separated on the frame such that the representation is based on the plurality of proximity detectors.

[000484] Clause 6. The head-worn computer of clause 1, further comprising a haptic feedback system mounted on the frame, wherein the haptic feedback system provides a haptic feedback to the user based on the proximity.

[000485] Clause 7. The head-worn computer of clause 6, wherein the haptic feedback system provides a variable intensity haptic feedback depending on the proximity detection system's prediction of the user's interaction getting closer to the user interface.

[000486] Clause 8. A head-worn computer, comprising: a computer display; a frame mechanically adapted to position the computer display in front of an eye of a user when the head-worn computer is mounted on the head of the user; a physical user interface mounted on the frame; and a proximity detection system mounted on the frame and adapted to detect a position of a user interaction with the frame, wherein the proximity detection system provides feedback to the user of the user's interaction position with respect to the physical user interface.

[000487] Clause 9. The head-worn computer of clause 8, wherein the feedback to the user is provided as a visual representation, presented in the computer display, of the position of the user's interaction with the frame.

[000488] Clause 10. The head-worn computer of clause 8, wherein the feedback to the user is provided as a haptic feedback.

[000489] Clause 11. The head-worn computer of clause 10, wherein the haptic feedback is variable depending on the user's position relative to the physical user interface.

[000490] Clause 12. The head-worn computer of clause 8, wherein the proximity detection system is a capacitive system.

[000491] Clause 13. The head-worn computer of clause 8, wherein the proximity detection system is a photoelectric system.

[000492] In some implementations, frames mechanically supporting head-worn computers may be described in the following clauses or otherwise described herein and as illustrated in Fig. 57.

[000493] CLAUSE SET C

[000494] Clause 1. A head-worn computer, comprising: a frame mechanically adapted to position a see-through optical computer display in front of a user's eye, wherein the frame includes an arm adapted to hold the frame to the user's head; a rotary dial mounted on the arm and accessible to the user; a selection device mounted on the arm proximate the rotary dial; and the rotary dial adapted to control a volume of the head-worn computer's audio system and further adapted to control a brightness of content presented in the see-through optical computer display, wherein the selection device controls whether the volume or brightness is controlled by the rotary dial.

[000495] Clause 2. The head-worn computer of clause 1, wherein an indication is presented in the see-through optical display to inform the user of whether the volume or brightness is presently selected for control.

[000496] Clause 3. The head-worn computer of clause 1, wherein an indication is presented in the see-through optical display to inform the user of a setting of a selected controllable attribute.

[000497] In some implementations, frames supporting a finger tap action of a head-worn computer may be described in the following clauses or otherwise described herein and as illustrated in Fig. 59.

[000498] CLAUSE SET D

[000499] Clause 1. A head-worn computer, comprising: an inertial measurement unit ("IMU") in communication with a processor; the processor adapted to identify a user tap control action, the user tap control action being a finger tap on a frame of the head-worn computer, wherein the finger tap is measured by the IMU; and the processor further adapted to control an aspect of a software application operating on the head-worn computer.

[000500] Clause 2. The head-worn computer of clause 1, further comprising a rotary user interface, mounted on an arm of the head-worn computer, wherein the rotary user interface controls an aspect of the software application in conjunction with the user tap control action.

[000501] Clause 3. The head-worn computer of clause 1, further comprising a touch surface user interface, mounted on an arm of the head-worn computer, wherein the touch surface user interface controls an aspect of the software application in conjunction with the user tap control action.

[000502] Clause 4. The head-worn computer of clause 1, wherein the user tap control action initiates a selection of an element in a graphical user interface presented on a see-through display of the head-worn computer.

[000503] Clause 5. A head-worn computer, comprising: a strain gauge adapted to measure strain on an area mounted on the head-worn computer; and the strain gauge associated with a processor, the processor adapted to interpret strain measurements from the strain gauge and to control an aspect of a software application operating on the head-worn computer.

[000504] Clause 6. The head-worn computer of clause 5, wherein the strain gauge includes a physical feature indicative of an area for user interaction.

[000505] Clause 7. The head-worn computer of clause 6, wherein the physical feature is a plurality of features indicative of different user controls, wherein the processor is adapted to control a different aspect of the software application based on which of the plurality of features is indicating user interaction.

[000506] Clause 8. The head-worn computer of clause 5, wherein the processor interprets a scroll action when the strain gauge measures a substantially linear swipe.

[000507] Clause 9. A head-worn computer, comprising: a proximity detection system positioned to identify when the head-worn computer has been mounted on a user's head; a processor adapted to be in a sleep mode when the head-worn computer is not mounted on the user's head, the processor monitoring the proximity detection system while in the sleep mode; and the processor further adapted to wake and turn on a see-through computer display mounted in the head-worn computer when the proximity sensor detects the user's head.

[000508] Clause 10. The head-worn computer of clause 9, wherein the proximity detection system is positioned to detect the presence of the user's forehead.

[000509] Clause 11. The head-worn computer of clause 9, wherein the proximity detection system is mounted in an arm of the head-worn computer.

[000510] In some implementations, a method for synchronizing content from the cloud between multiple head-worn computers may be described in the following clauses or otherwise described herein and as illustrated in Fig. 115.

[000511] CLAUSE SET E

[000512] Clause 1. A method for synchronizing content from the cloud between multiple head-worn computers to provide a synchronized experience to multiple users of head-worn computers, comprising: linking the multiple users to the same access point in the cloud; identifying how many head-worn computers will be included in the synchronized experience of the content, the multiple users indicating to the cloud that they would like to participate in a synchronized experience of the content; downloading the content to the multiple head-worn computers from the cloud; polling the multiple head-worn computers to determine the percentage of the content that has been downloaded to each of the multiple head-worn computers; and when all of the multiple head-worn computers have exceeded a predetermined percentage of content that has been downloaded, sending a start command to each of the head-worn computers simultaneously to begin a synchronized presentation of the content to all of the multiple users.

[000513] Clause 2. The method of clause 1, wherein the content is selected by each of the multiple users.

[000514] Clause 3. The method of clause 1, further comprising verifying that the multiple users are all in the same location.

[000515] Clause 4. The method of clause 3, wherein the verifying includes determining that the multiple head-worn computers share a common IP address.

[000516] Clause 5. The method of clause 3, wherein the verifying includes determining that the multiple head-worn computers have the same GPS location.

[000517] Clause 6. The method of clause 5, wherein the verifying includes determining that the multiple head-worn computers have had compass headings that are opposite to at least one other head-worn computer within a period of time, thereby indicating that the users have looked at one another.

[000518] Clause 7. The method of clause 1, further comprising the steps of: the cloud supplying the content with time stamps; and the cloud polling the multiple head-worn computers to determine whether the content is synchronized.

[000519] Clause 8. The method of clause 7, wherein if the content is determined to not be synchronized as presented on one of the head-worn computers, a portion of the content is repeated or removed on the one head-worn computer to improve the synchronization of the content between the multiple head-worn computers.

[000520] Clause 9. The method of clause 1, wherein if the link to the cloud is lost, one of the head-worn computers is selected as a host and the other head-worn computers in the group are designated as clients, and the host provides synchronized content to the clients.

[000521] Clause 10. The method of clause 9, wherein the selecting is accomplished by communication between the multiple head-worn computers to determine which of the head-worn computers has downloaded the most of the content.

[000522] Clause 11. The method of clause 1, wherein the multiple users download an application that controls the steps included in the method of synchronizing content from the cloud.

[000523] Clause 12. The method of clause 1, wherein the content includes one or more of the following: movies, games or interactive experiences with imagery and haptic stimulation.

[000524] Clause 13. A method of simultaneously sharing content between multiple users of head-worn computers so that the content is presented to the multiple users in a synchronized fashion, comprising: a user downloading content to one of the head-worn computers; identifying the one head-worn computer as a host for sharing the content; one or more other head-worn computers indicating that they would like to share the content as clients; the host polling adjacent head-worn computers to identify which head-worn computers the content is to be shared with; the host downloading the content to the identified head-worn computers; and the host simultaneously initiating a start of the content in the identified head-worn computers.

[000525] Clause 14. The method of clause 13, wherein the step of identifying which head-worn computers the content is to be shared with includes the step of identifying head-worn computers with the same IP address.

[000526] Clause 15. The method of clause 13, wherein the step of identifying which head-worn computers the content is to be shared with includes the step of identifying head-worn computers with the same GPS location.

[000527] Clause 16. The method of clause 15, wherein the step of identifying which head-worn computers the content is to be shared with includes the step of identifying which head-worn computers have had opposite compass headings to the host within a period of time, thereby indicating that the users have looked at one another.

[000528] Clause 17. The method of clause 13, further comprising the steps of: providing, from the host, the content with time stamps; and polling, by the host, the clients to determine whether the content is synchronized as presented on the clients' head-worn displays.

[000529] Clause 18. The method of clause 17, wherein if the content is determined to not be synchronized as presented on one of the head-worn computers, a portion of the content is repeated or removed on the one of the head-worn computers to improve the synchronization of the content between the multiple head-worn computers.

[000530] In some implementations, an optical assembly for displaying an image in a head-worn display may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000531] CLAUSE SET F

[000532] Clause 1. An optical assembly for displaying an image in a head-worn display comprising: an image source providing image light; display optics including multiple elements with optical surfaces that are cemented together with one or more transparent adhesives; wherein the cemented display optics include multiple internal optical surfaces comprising at least one refractive surface supplying optical power and at least two partially reflective surfaces; and the cemented display optics present the image light in a display field of view to display the image to a user.

[000533] Clause 2. The optical assembly of clause 1 wherein the transparent adhesive is index matched to at least one of the elements.

[000534] Clause 3. The optical assembly of clause 1 wherein the cemented display optics provide the user with a see-through view of a surrounding environment.

[000535] Clause 4. The optical assembly of clause 3 wherein the see-through view can pass through all of the multiple elements.

[000536] Clause 5. The optical assembly of clause 1 wherein the multiple elements include at least two optical materials.

[000537] Clause 6. The optical assembly of clause 1 wherein at least two of the cemented optical surfaces are spherical.

[000538] Clause 7. The optical assembly of clause 1 wherein the cemented display optics comprise a pre-assembled optic.

[000539] Clause 8. The optical assembly of clause 7 wherein the pre-assembled optic is installed into a frame along with the image source.

[000540] Clause 9. The optical assembly of clause 8 wherein the frame rigidly holds the image source in relation to the pre-assembled optic.

[000541] Clause 10. The optical assembly of clause 9 wherein the frame rigidly holds a left pre-assembled optic and a right pre-assembled optic relative to the left and right eyes of the user.

[000542] Clause 11. The optical assembly of clause 4 wherein the cemented display optics are uniform in thickness so the see-through view of the surrounding environment is undistorted.

[000543] Clause 12. The optical assembly of clause 11 wherein the multiple elements include elements that are designed with first surfaces that match other elements and second surfaces that are plano to provide the uniform thickness of the cemented display optics.

[000544] Clause 13. The optical assembly of clause 1 wherein the cemented display optics include at least one external refractive surface that supplies optical power.

[000545] Clause 14. The optical assembly of clause 1 wherein one of the reflective surfaces supplies optical power.

[000546] Clause 15. The optical assembly of clause 1 wherein at least one of the elements is provided with an airgap to an adjacent element and the airgap is supported by a special flange associated with the element that is adhesively bonded to the adjacent element.

[000547] Clause 16. The optical assembly of clause 7 wherein the pre-assembled optic includes sides that flare outward toward a front surface to better match a line of sight from the user's eye.

[000548] Clause 17. The optical assembly of clause 7 wherein one or more sides of the pre-assembled optic are blackened to reduce stray light.

[000549] Clause 18. The optical assembly of clause 1 wherein the multiple elements include at least a field lens, a power lens, a front lens and a prism.

[000550] Clause 19. The optical assembly of clause 1 wherein at least one of the partially reflective surfaces is a notch mirror.

[000551] Clause 20. The optical assembly of clause 19 wherein one of the multiple elements is a glass element and the notch mirror is associated with a surface of the glass element.

[000552] Clause 21. The optical assembly of clause 5 wherein at least one of the elements has a refractive index that is at least 0.05 greater than the other elements and at least two of the other elements have refractive indices that are the same within < 0.05.

[000553] Clause 22. The optical assembly of clause 11 wherein different portions of the see-through view pass through elements with different refractive indices.

[000554] Clause 23. The optical assembly of clause 11 wherein a portion of the see-through view passes through multiple elements with refractive indices that are the same within 0.05.

[000555] In some implementations, an optical assembly for a head-worn display that has improved manufacturability may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000556] CLAUSE SET G

[000557] Clause 1. An optical assembly for a head-worn display that has improved manufacturability, comprising: multiple elements with pairs of matching optical surfaces that are attached together with one or more index matched materials to provide a solid optical assembly; wherein within each pair of matching optical surfaces, one surface establishes an accurate optical surface and the other surface is filled by the index matched material; and wherein the multiple elements include self-aligning features to guide the alignment of the multiple elements as they are attached together.

[000558] Clause 2. The optical assembly of clause 1 wherein at least one of the multiple elements in the optical assembly includes mounting features for attaching the optical assembly into a frame.

[000559] Clause 3. The optical assembly of clause 2 wherein the mounting features align left and right optical assemblies as they are attached to the frame so that images provided to the left and right eyes of a user are aligned with one another.

[000560] Clause 4. The optical assembly of clause 1 further comprising alignment features to support and position a separate optical element adjacent to the solid optical assembly to enable a wider display field of view.

[000561] Clause 5. The optical assembly of clause 4 wherein the alignment features are molded with one or more of the multiple elements.

[000562] Clause 6. The optical assembly of clause 4 wherein an air gap separates the solid optical assembly and the separate optical element.

[000563] Clause 7. The optical assembly of clause 1 further comprising a corrective ophthalmic element adjacent to a rear surface of the solid optical assembly.

[000564] Clause 8. The optical assembly of clause 7 wherein the corrective ophthalmic element is aligned relative to the solid optical assembly and adhesively bonded to the solid optical assembly.

[000565] Clause 9. The optical assembly of clause 7 wherein the corrective ophthalmic element is physically held in place and aligned relative to the solid optical assembly by mechanical features associated with a frame of the head-worn display.

[000566] Clause 10. The optical assembly of clause 7 wherein the corrective ophthalmic element and the solid optical assembly include interlocking features to align them relative to one another.

[000567] Clause 11. The optical assembly of clause 1 wherein the cemented optical assembly comprises a pre-assembled optic that is installed into a frame of the head-worn display.

[000568] Clause 12. The optical assembly of clause 1 wherein the self-aligning features include at least one special flange associated with an element that aligns adjacent elements with each other and creates an airgap between the adjacent elements.

[000569] Clause 13. The optical assembly of clause 1 wherein at least one of the index matched materials is a transparent adhesive.

[000570] Clause 14. The optical assembly of clause 1 wherein at least one of the index matched materials is a transparent gel.

[000571] Clause 15. The optical assembly of clause 1 wherein at least one of the index matched materials is a transparent oil.

[000572] Clause 16. The optical assembly of clause 1 wherein the multiple elements include at least a field lens, a power lens, a front lens and a prism.

[000573] Clause 17. The optical assembly of clause 1 wherein the solid optical assembly includes at least one internal refractive surface and at least two internal partially reflective surfaces.

[000574] Clause 18. The optical assembly of clause 17 wherein the elements are comprised of at least two different materials.

[000575] Clause 19. The optical assembly of clause 17 wherein the solid optical assembly includes at least one external refractive surface.

[000576] Clause 20. The optical assembly of clause 18 wherein at least one of the elements is comprised of a glass material with a surface that includes a partially reflective surface that is a notch mirror.

[000577] Clause 21. The optical assembly of clause 20 wherein the index matched material associated with the glass element is a transparent oil or gel and the glass element is held in alignment with the other elements by adjacent features or structures.

[000578] In some implementations, robust optics for a head-worn display may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000579] CLAUSE SET H

[000580] Clause 1. Robust optics for a head-worn display, comprising an optical assembly comprised of multiple elements that are aligned relative to one another and then adhesively bonded together to preserve the alignment in an adhesively bonded optic; wherein the adhesively bonded optic includes at least one internal refractive surface and two or more internal partially reflective surfaces; and wherein the internal partially reflective surfaces are protected by other elements in the adhesively bonded optic.

[000581] Clause 2. The robust optics of clause 1 wherein the adhesively bonded optic comprises a pre-assembled optic that is installed and affixed into a frame of the head-worn display.

[000582] Clause 3. The robust optics of clause 2 further comprising an image source that is aligned relative to the adhesively bonded optic.

[000583] Clause 4. The robust optics of clause 3 wherein the frame aligns the adhesively bonded optic and the image source relative to a user's eye to display an image to the user's eye.

[000584] Clause 5. The robust optics of clause 4, comprising two adhesively bonded optics and two image sources in a frame to display images to both of the user's eyes.

[000585] Clause 6. The robust optics of clause 1 further comprising a corrective ophthalmic element in a holder that rigidly clips to the edges of the adhesively bonded optic.

[000586] Clause 7. The robust optics of clause 1 further comprising a corrective ophthalmic element that is adhesively bonded to the back surface of the adhesively bonded optic.

[000587] Clause 8. The robust optics of clause 1 wherein the adhesively bonded optic further comprises at least three internal refractive surfaces with an airgap between two of the refractive surfaces.

[000588] Clause 9. The robust optics of clause 8 further comprising a special flange associated with one of the elements that positions the element in relation to one or more adjacent elements and also seals the edges of the airgap, thereby preventing dust from entering the airgap.

[000589] Clause 10. The robust optics of clause 1 wherein one of the internal partially reflective surfaces is protected by a glass element.

[000590] Clause 11. The robust optics of clause 10 wherein at least one of the internal partially reflective surfaces is a notch mirror associated with a surface of the glass element.

[000591] Clause 12. The robust optics of clause 1 wherein the multiple elements include at least four elements.

[000592] Clause 13. The robust optics of clause 8 wherein the multiple elements include at least five elements.

[000593] Clause 14. The robust optics of clause 1 wherein the adhesively bonded optic has a uniform thickness.

[000594] Clause 15. The robust optics of clause 14 wherein the head-worn display displays an image to a user's eye and the user simultaneously receives a see-through view of a surrounding environment by looking through the adhesively bonded optic.

[000595] In some implementations, optics for displaying images in a head-worn display along with a see-through view of a surrounding environment that has an increased vertical see-through field of view may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000596] CLAUSE SET I

[000597] Clause 1. Optics for displaying images in a head-worn display along with a see-through view of a surrounding environment that has an increased vertical see-through field of view, comprising: an image source providing image light; display optics including multiple elements with optical surfaces that are cemented together with one or more transparent adhesives to provide a solid display optic; wherein the solid display optic includes multiple internal optical surfaces comprising at least one refractive surface supplying optical power and at least two reflective surfaces; and the solid display optic provides a uniform thickness across the multiple elements through which a user is provided a view of the surrounding environment that has a larger vertical see-through field of view than the vertical display field of view that is provided by the solid display optic.

[000598] Clause 2. The optics of clause 1 wherein portions of the view of the surrounding environment are provided through all of the multiple elements.

[000599] Clause 3. The optics of clause 1 wherein the multiple elements include multiple elements made of materials with very similar refractive indices and at least one element made of a material with a different refractive index.

[000600] Clause 4. The optics of clause 3 wherein the materials with very similar refractive indices have indices within < 0.05 of each other and the material with a different refractive index differs by > 0.05 from the other materials.

[000601] Clause 5. The optics of clause 2 wherein a first portion of the view of the surrounding environment passes through two or more of the elements and a second portion of the view of the surrounding environment passes through only one element.

[000602] Clause 6. The optics of clause 1 wherein the solid display optic has a uniform thickness through which a user views the see-through view of the surrounding environment.

[000603] Clause 7. The optics of clause 6 wherein the uniform thickness solid display optic provides an undistorted view of the surrounding environment to a user.

[000604] Clause 8. The optics of clause 7 wherein distortion in the view of the surrounding environment is less than 0.5 degree.

[000605] Clause 9. The optics of clause 2 wherein the image light passes through some, but not all of the elements.

[000606] In some implementations, an optical module for displaying an image in a head-worn display wherein the optical module has reduced thickness may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000607] CLAUSE SET J

[000608] Clause 1. An optical module for displaying an image in a head-worn display wherein the optical module has reduced thickness, comprising: an image source providing image light; display optics including multiple elements with optical surfaces that are cemented together with a transparent adhesive to provide a solid display optic; wherein the solid display optic includes multiple internal optical surfaces comprising at least one refractive surface supplying optical power and at least two partially reflective surfaces; wherein after the image light has been refracted and reflected by the internal surfaces, it undergoes a refractive effect as it exits the solid display optics such that the subtended angle associated with the image light is increased after exit; and the increased subtended angle of the image light after exit is associated with an increased display field of view, while the thickness of the solid display optic is reduced relative to the display field of view because the subtended angle of the image light is less while the image light is contained within the solid display optic.

[000609] Clause 2. The optical module of clause 1 wherein at least one partially reflective surface is curved to provide optical power to the image light.

[000610] Clause 3. The optical module of clause 2 wherein the at least one curved partially reflective surface has a larger radius so the curved partially reflective surface has less sag, because it is positioned internal to the solid display optic thereby reducing the thickness of the solid display optic.

[000611] Clause 4. The optical module of clause 1 wherein at least one footprint of the image light on a partially reflective surface is reduced in area because the partially reflective surface is internal to the solid display optic.

[000612] Clause 5. The optical module of clause 4 wherein one of the partially reflective surfaces is at an angle to the thickness of the solid display optic so that a reduction in the area of the image light footprint enables a reduction in the thickness of the solid display optic.

[000613] Clause 6. The optical module of clause 1 wherein the display optics are arranged with a folded optical path to reduce the overall size.

[000614] Clause 7. The optical module of clause 2 wherein the curved partially reflective surface is associated with a prismatic element.

[000615] Clause 8. The optical module of clause 2 wherein the curved partially reflective surface is associated with a plano element that mates with a prismatic element.

[000616] Clause 9. The optical module of clause 8 wherein the curved partially reflective surface is a notch mirror.

[000617] Clause 10. The optical module of clause 8 wherein the solid display optic includes at least four elements cemented together.

[000618] Clause 11. The optical module of clause 1 wherein the solid display optic further comprises a separate optical element supported by one or more flanges so that an air gap separates the separate optical element from the remaining elements in the solid display optic.

[000619] Clause 12. The optical module of clause 11 wherein the flanges also align the separate optical element with the remaining elements in the solid display optic.

[000620] In some implementations, a solid optics module for displaying an image in a head-worn display that also provides a see-through view of a surrounding environment wherein the solid optics module provides an increased display field of view may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000621] CLAUSE SET K

[000622] Clause 1. A solid optics module for displaying an image in a head-worn display that also provides a see-through view of a surrounding environment wherein the solid optics module provides an increased display field of view, comprising: an image source providing image light; three or more lens elements with optical surfaces that are attached together with one or more index matched transparent materials to provide a solid display optic that provides a see-through view of the surrounding environment; wherein the solid display optic includes multiple internal optical surfaces comprising at least one refractive surface that supplies optical power to the image light, at least one partially reflective surface that also supplies optical power to the image light and at least one plano partially reflective surface that redirects the image light; at least one separate optical element positioned between the image source and the solid display optic that supplies optical power to the image light and is supported such that an airgap is provided between the separate optical element and the solid display optic to provide a solid optics module; and wherein the solid optics module provides an increased display field of view.

[000623] Clause 2. The solid optics module of clause 1 wherein the increased display field of view is greater than 35 degrees.

[000624] Clause 3. The solid optics module of clause 1 wherein the increased display field of view is 40 degrees or greater.

[000625] Clause 4. The solid optics module of clause 1 wherein support for the separate optical element is provided by a frame of the head-worn display.

[000626] Clause 5. The solid optics module of clause 1 wherein support for the separate optical element is provided by a special flange associated with an element in the solid display optic.

[000627] Clause 6. The solid optics module of clause 5 wherein the special flange supports and aligns the separate optical element relative to the solid display optic.

[000628] Clause 7. The solid optics module of clause 5 wherein the special flange is adhesively bonded to attach the separate optical element to the solid display optic thereby providing an extended solid display optic.

[000629] Clause 8. The solid optics module of clause 5 wherein the special flange extends around one or more edges of the separate optical element and the solid display optic to prevent dust from entering the airgap.

[000630] Clause 9. The solid optics module of clause 1 wherein the solid display optic includes at least three elements and the see-through view passes through all of the elements of the solid display optic.

[000631] Clause 10. The solid optics module of clause 1 wherein the solid display optic includes at least four elements and the see-through view passes through all of the elements of the solid display optic.

[000632] Clause 11. The solid optics module of clause 10 wherein the four elements include a middle element, a power lens, a front lens and a prism.

[000633] Clause 12. The solid optics module of clause 1 wherein the solid display optic has a uniform thickness.

[000634] Clause 13. The solid optics module of clause 11 wherein a front surface of the front lens and a back surface of the prism are both plano.

[000635] Clause 14. The solid optics module of clause 11 wherein a front surface of the front lens and a back surface of the prism have concentric curves.

[000636] Clause 15. The solid optics module of clause 1 wherein the sides of the solid display optic are flared to provide an improved see-through view.

[000637] In some implementations, a head-worn computer with a multi-piece solid optical module may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000638] CLAUSE SET L

[000639] Clause 1. A head-worn computer, comprising: a multi-piece solid optical module, wherein the multi-piece solid optical module includes a plurality of materials adapted to transmit image light to an eye of a user and further adapted to provide the user with a see-through view of a surrounding environment; the multi-piece solid optical module further including a substantially flat vertical outer surface; and an ophthalmologic corrective optic adapted to be removably and replaceably mounted to the substantially flat vertical outer surface.

[000640] Clause 2. The head-worn computer of clause 1, wherein the ophthalmologic corrective optic includes a substantially flat surface that substantially aligns with the substantially flat vertical outer surface of the multi-piece solid optical module.

[000641] Clause 3. The head-worn computer of clause 1, wherein the removability and replaceability is based on a mechanical attachment system.

[000642] Clause 4. The head-worn computer of clause 1, wherein the removability and replaceability is based on a magnetic attachment system.

[000643] Clause 5. The head-worn computer of clause 3 wherein the mechanical attachment system includes a clip that attaches to the sides of the solid optical module.

[000644] In some implementations, a head-worn computer with a solid optics module may be described in the following clauses or otherwise described herein and as illustrated in Fig. 77.

[000645] CLAUSE SET M

[000646] Clause 1. A solid optics module for displaying an image in a display field of view within a head-worn display that also provides a see-through view of a surrounding environment wherein the solid optics module provides increased efficiency, comprising: an image source providing image light; three or more lens elements with optical surfaces that are attached together with one or more index matched transparent adhesives to provide a solid display optic that provides a see-through view of the surrounding environment; wherein the solid display optic includes multiple internal optical surfaces comprising at least one refractive surface that supplies optical power to the image light, at least one reflective surface that also supplies optical power to the image light and at least one plano partially reflective surface that redirects a portion of the image light toward an eyebox; and wherein the reflective surface that supplies optical power is positioned at the bottom of the solid display optic.

[000647] Clause 2. The solid optics module of clause 1 wherein the reflective surface has greater than 90% reflectivity of the image light.

[000648] Clause 3. The solid optics module of clause 1 wherein the plano partially reflective surface is a dielectric partial mirror coating.

[000649] Clause 4. The solid optics module of clause 3 wherein the dielectric partial mirror has 20 to 50% reflectivity and 80 to 50% transmission of the image light.

[000650] Clause 5. The solid optics module of clause 1 wherein the solid display optic is comprised of an upper lens, an upper prism element and a lower prism element.

[000651] Clause 6. The solid optics module of clause 5 wherein the upper lens includes two or more refractive elements comprised of at least two different materials with refractive indices that differ from one another by at least 0.05 to provide an increased display field of view.

[000652] Clause 7. The solid optics module of clause 6 wherein the display field of view is 40 degrees or greater.

[000653] Clause 8. The solid optics module of clause 6 wherein the solid display optic includes at least two internal refractive surfaces.

[000654] Clause 9. The solid optics module of clause 1 wherein the solid display optic has a uniform thickness to provide an undistorted see-through view of the surrounding environment.

[000655] Clause 10. The solid optics module of clause 5 wherein the upper prism element and the lower prism element are designed to have the same shape and same material.

[000656] Clause 11. The solid optics module of clause 1 wherein the image light is polarized and the plano partially reflective surface is a reflective polarizer.

[000657] Clause 12. The solid optics module of clause 11 wherein a quarter wave film is included adjacent to the reflective surface that supplies optical power.

[000658] In some implementations, a head-worn computer with a see-through display that generates image light comprising narrow bandwidths of red, green and blue light may be described in the following clauses or otherwise described herein and as illustrated in Fig. 95b.

[000659] CLAUSE SET N

[000660] Clause 1. A head-worn computer, comprising:

[000661] a see-through display wherein computer content is presented to a user wearing the head-worn computer and through which the user sees a surrounding environment, wherein the see-through display generates image light comprising narrow bandwidths of red, green and blue light and wherein the see-through display further includes a tristimulus notch mirror positioned to reflect the image light towards the user's eye, and wherein the tristimulus notch mirror reflects less than a full width half max of the red image light.

[000662] Clause 2. The head-worn computer of clause 1, wherein the image light is generated with an emissive display.

[000663] Clause 3. The head-worn computer of clause 1, wherein the image light is generated with a reflective display.

[000664] In some implementations, a head-worn apparatus with a speaker system may be described in the following clauses or otherwise described herein and as illustrated in Fig. 14ka.

[000665] CLAUSE SET O

[000666] Clause 1. A head-worn apparatus, comprising: an arm adapted to secure the head-worn apparatus to a user's head; a speaker system mounted in the arm and positioned to emit sound through an opening in the arm; and an audio extension tube adapted to be positioned proximate the opening in the arm and to transfer the emitted sound towards the user's head.

[000667] In some implementations, an optic adapted to be mounted on a head-worn computer may be described in the following clauses or otherwise described herein and as illustrated in Fig. 66.

[000668] CLAUSE SET P

[000669] Clause 1. An apparatus, comprising: an optic adapted to be mounted on a head-worn computer, wherein the optic has a plurality of reflective features; a light source positioned to transmit light into an edge of the optic such that the light is transmitted along an interior portion of the optic until it interferes with the plurality of reflective features, wherein following the interference, the internally transmitted light reflects out of the optic in a pattern corresponding to a position of each of the plurality of reflective features towards the eye of a user of the head-worn computer; and a camera positioned to capture the light that is reflected off of the eye of the user.

[000670] Clause 2. A modular apparatus, comprising: an optic adapted to be removably and replaceably mounted on a head-worn computer, wherein the optic has a plurality of reflective features; a light source positioned to transmit light into an edge of the optic such that the light is transmitted along an interior portion of the optic until it interferes with the plurality of reflective features, wherein following the interference, the internally transmitted light reflects out of the optic in a pattern corresponding to a position of each of the plurality of reflective features towards the eye of a user of the head-worn computer; and a camera positioned to capture the light that is reflected off of the eye of the user.

[000671] Clause 3. A modular apparatus, comprising: an optic adapted to be removably and replaceably mounted on a head-worn computer, wherein the optic has a plurality of reflective features; a light source positioned to transmit light into an edge of the optic such that the light is transmitted along an interior portion of the optic until it interferes with the plurality of reflective features, wherein following the interference, the internally transmitted light reflects out of the optic in a pattern corresponding to a position of each of the plurality of reflective features towards the eye of a user of the head-worn computer; and a camera removably and replaceably mounted to the head-worn computer positioned to capture the light that is reflected off of the eye of the user.

[000672] Clause 4. A modular apparatus, comprising: an optic adapted to be removably and replaceably mounted on a head-worn computer, wherein the optic has a plurality of reflective features; a light source positioned to transmit light into an edge of the optic such that the light is transmitted along an interior portion of the optic until it interferes with the plurality of reflective features, wherein following the interference, the internally transmitted light reflects out of the optic in a pattern corresponding to a position of each of the plurality of reflective features towards the eye of a user of the head-worn computer; and a camera mounted on a nose bridge, wherein the nose bridge is removably and replaceably mounted to the head-worn computer and the camera is positioned to capture the light that is reflected off of the eye of the user.

[000673] In some implementations, a head-worn computer with an electrical connector adapted to electrically connect with a modular expansion module may be described in the following clauses or otherwise described herein and as illustrated in Fig. 61.

[000674] CLAUSE SET Q

[000675] Clause 1. A head-worn computer, comprising: an electrical connector adapted to electrically connect with a modular expansion module, wherein the modular expansion module adds a capability to the head-worn computer and is removably mounted to the head-worn computer; and a mount adapted to physically secure the modular expansion module to the head-worn computer.

[000676] Clause 2. The head-worn computer of clause 1, wherein the mount comprises a magnetic element to physically secure the modular expansion module.

[000677] Clause 3. The head-worn computer of clause 1, wherein the mount comprises a snap fit element to physically secure the modular expansion module.

[000678] Clause 4. The head-worn computer of clause 1, wherein the mount comprises an alignment element to physically align the modular expansion module on the head-worn computer.

[000679] Clause 5. The head-worn computer of clause 1, further comprising a processor adapted to recognize a function of the modular expansion module, when connected to the modular expansion module, such that the modular expansion module operates in accordance with a schema identified for the function.

[000680] Although embodiments of HWC have been described in language specific to features, systems, computer processes and/or methods, the appended claims are not necessarily limited to the specific features, systems, computer processes and/or methods described. Rather, the specific features, systems, computer processes and/or methods are disclosed as non-limiting example implementations of HWC. All documents referenced herein are hereby incorporated by reference.

[000681] While many of the embodiments herein describe see-through computer displays, the scope of the disclosure is not limited to see-through computer displays. In embodiments, the head-worn computer may have a display that is not see-through. For example, the head-worn computer may have a sensor system (e.g. camera, ultrasonic system, radar, etc.) that images the environment proximate the head-worn computer and then presents the images to the user such that the user can understand the local environment through the images as opposed to seeing the environment directly. In embodiments, the local environment images may be augmented with additional information and content such that an augmented image of the environment is presented to the user. In general, in this disclosure, such see-through and non-see-through systems may be referred to as head-worn augmented reality systems, augmented reality displays, augmented reality computer displays, etc.