Title:
NEAR-EYE LIGHT-FIELD DISPLAY SYSTEM
Document Type and Number:
WIPO Patent Application WO/2016/176207
Kind Code:
A1
Abstract:
A near-eye light field display for use with a head mounted display unit with enhanced resolution and color depth. A display for each eye is connected to one or more actuators to scan each display, increasing the resolution of each display by a factor proportional to the number of scan points utilized. In this way, the resolution of near-eye light field displays is enhanced without increasing the size of the displays.

Inventors:
GUTIERREZ ROMAN (US)
Application Number:
PCT/US2016/029365
Publication Date:
November 03, 2016
Filing Date:
April 26, 2016
Assignee:
MEMS START LLC (US)
International Classes:
G02B6/02; G02F1/21; H04N5/64
Foreign References:
US20110007277A12011-01-13
US8290358B12012-10-16
US20110012874A12011-01-20
US20140240578A12014-08-28
US20120068913A12012-03-22
Attorney, Agent or Firm:
HEISEY, David, E. (12275 El Camino Real Suite 20, San Diego CA, US)
Claims:
Claims

What is claimed is:

1. A head mounted display, comprising:

an array of lenses comprising a plurality of light field lenses;

an array of displays comprising a plurality of display components, each display component comprising a light source disposed on a circuit board, at least one display component including a light source disposed on an actuator;

an exterior housing supporting the array of lenses, the exterior housing connected to an outside edge of the array of lenses;

an interior housing supporting the array of displays, the array of displays disposed on a top surface of the interior housing;

wherein the array of lenses is disposed at a fixed distance from the array of displays, and each light field lens of the plurality of light field lenses is parallel to at least one display component of the array of displays; and

wherein an actuator control component is communicatively coupled to the array of displays and a processor unit, and configured to move the at least one light source disposed on the actuator in accordance with a scan pattern.

2. The head mounted display of claim 1, wherein the processor unit is configured to synchronize the illumination of a plurality of pixels of the light source with the scan pattern.

3. The head mounted display of claim 1, wherein the light source comprises an OLED.

4. The head mounted display of claim 1, wherein the light source comprises one of: an LED; an LCD; a plasma display.

5. The head mounted display of claim 1, wherein the scan pattern comprises a raster scan pattern configured to scan the at least one actuator in-plane.

6. The head mounted display of claim 1, wherein the scan pattern results in a Lissajous curve.

7. The head mounted display of claim 1, further comprising one or more cameras housed in the exterior housing, each camera having a field of view encompassing a portion of a view of a user, and wherein the processor is further configured to compute a light field representation.

8. The head mounted display of claim 7, wherein the one or more cameras comprise one or more light field cameras.

9. The head mounted display of claim 7, wherein the one or more cameras comprise a motorized focus.

10. The head mounted display of claim 1, each display component comprising one or more focus sensors disposed on a surface of the display component between a plurality of pixels of the light source on the surface of the display component.

11. The head mounted display of claim 1, wherein the scan pattern comprises a depth scan pattern configured to scan the at least one actuator in the Z-axis.

12. The head mounted display of claim 1, wherein an opaque mask is disposed at a transition point between each light field lens of the array of lenses, wherein the transition point comprises a point where an edge of a first light field lens meets an edge of a second light field lens.

13. A head mounted display, comprising:

a light field lens;

a display component comprising a light source disposed on an actuator;

an exterior housing supporting the light field lens;

an interior housing supporting the display component disposed on a top surface of the interior housing;

wherein the light field lens is disposed opposite the display component, and parallel to the display component;

a vertical motion actuator disposed between the interior housing and the exterior housing such that, when activated, the interior housing moves in a vertical direction relative to the exterior housing to increase or decrease the distance between the light field lens and the display component; and

wherein an actuator control component is communicatively coupled to the display component and a processor unit, and configured to move the light source disposed on the actuator laterally and the vertical motion actuator vertically in accordance with a scan pattern.

14. The head mounted display of claim 13, wherein the processor unit is configured to synchronize the illumination of a plurality of pixels of the light source with the scan pattern.

15. The head mounted display of claim 13, wherein the light source comprises an OLED.

16. The head mounted display of claim 13, wherein the light source comprises one of: an LED; an LCD; a plasma display.

17. The head mounted display of claim 13, wherein the scan pattern comprises a raster scan pattern configured to scan the actuator in-plane.

18. The head mounted display of claim 13, wherein the scan pattern results in a Lissajous curve.

19. The head mounted display of claim 13, further comprising one or more cameras housed in the exterior housing, each camera having a field of view encompassing a portion of a view of a user, and wherein the processor is further configured to compute a light field representation.

20. The head mounted display of claim 19, wherein the one or more cameras comprise one or more light field cameras.

21. The head mounted display of claim 19, wherein the one or more cameras comprise a motorized focus.

22. The head mounted display of claim 13, the display component comprising one or more focus sensors disposed on a surface of the display component between a plurality of pixels of the light source on the surface of the display component.

23. The head mounted display of claim 13, wherein the scan pattern comprises a depth scan pattern configured to scan the vertical motion actuator in the Z-axis.

24. The head mounted display of claim 13, further comprising an array of lenses comprising a plurality of light field lenses, an array of displays comprising a plurality of display components, at least one display component including a light source disposed on an actuator.

25. The head mounted display of claim 24, wherein an opaque mask is disposed at a transition point between each light field lens of the array of lenses, wherein the transition point comprises a point where an edge of a first light field lens meets an edge of a second light field lens.

Description:
NEAR-EYE LIGHT-FIELD DISPLAY SYSTEM

Technical Field

[0001] The disclosed technology relates generally to near-eye displays, and more particularly, some embodiments relate to near-eye systems having light-field displays.

Description of the Related Art

[0002] Head-mounted displays ("HMDs") are generally configured such that one or more displays are placed directly in front of a person's eyes. HMDs have been utilized in various applications, including gaming, simulation, and military uses. Traditionally, HMDs have comprised heads-up displays, wherein the user focuses on the display in front of the eyes, as images are traditionally displayed on a two-dimensional ("2D") surface. Optics are used to make the displays appear farther away than they actually are, in order to allow for a suitable display size to be utilized so close to the human eye. Despite the use of optics, however, HMDs generally have low resolution because of trade-offs related to the overall weight and form factor of the HMD, as well as pixel pitch.

Brief Summary of Embodiments

[0003] According to various embodiments of the disclosed technology, a head mounted display for generating light field representations is provided. The head mounted display comprises an array of lenses (comprising a plurality of light field lenses) positioned opposite and parallel to an array of displays (comprising a plurality of light sources). The array of lenses may be configured to capture light rays from one or more light sources of the array of displays to generate a near-eye light field representation. The head mounted display may include an exterior housing configured to support the edge of the array of lenses, and an interior housing configured to support the array of displays disposed on a surface of the interior housing. In some embodiments, the exterior housing and the interior housing may be positioned such that the distance between the array of lenses and the array of displays remains fixed. In other embodiments, a vertical motion actuator may be disposed between the interior housing and the exterior housing such that the interior housing may be moved vertically relative to the exterior housing to increase or reduce the distance between the two arrays, or vice versa.

[0004] Other features and aspects of the disclosed technology will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate, by way of example, the features in accordance with embodiments of the disclosed technology. The summary is not intended to limit the scope of any inventions described herein, which are defined solely by the claims attached hereto.

Brief Description of the Drawings

[0005] The technology disclosed herein, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The drawings are provided for purposes of illustration only and merely depict typical or example embodiments of the disclosed technology. These drawings are provided to facilitate the reader's understanding of the disclosed technology and shall not be considered limiting of the breadth, scope, or applicability thereof. It should be noted that for clarity and ease of illustration these drawings are not necessarily made to scale.

[0006] Figure 1 is an example diagram illustrating the basic theory of near-eye light field displays in accordance with embodiments of the technology described herein.

[0007] Figure 2 is an example diagram illustrating when an object falls within the field of view of two lenses of an array of lenses in accordance with embodiments of the technology disclosed herein.

[0008] Figure 3 is a diagram illustrating a basic configuration of a head mount display in accordance with embodiments of the technology disclosed herein.

[0009] Figure 4 illustrates an example improved near-eye display system in accordance with embodiments of the technology disclosed herein.

[0010] Figure 5 is an example light field system in accordance with embodiments of the technology disclosed herein.

[0011] Figures 6A and 6B illustrate an example scan pattern and enhanced resolution in accordance with embodiments of the technology described herein.

[0012] Figure 7 is a diagram illustrating the enhanced resolution achievable in accordance with embodiments of the technology disclosed herein.

[0013] Figure 8 is an example display configuration having one or more sensors disposed in between pixels of the display in accordance with embodiments of the technology disclosed herein.

[0014] Figure 9A illustrates an example array of lenses in accordance with embodiments of the technology disclosed herein.

[0015] Figure 9B illustrates another example array of lenses in accordance with embodiments of the technology disclosed herein.

[0016] Figure 10 illustrates an example light source array in accordance with embodiments of the technology disclosed herein.

[0017] Figure 11 illustrates a cross-sectional view of an example near-eye display system in accordance with embodiments of the technology disclosed herein.

[0018] Figure 12 illustrates another cross-sectional view of an example near-eye display system in accordance with embodiments of the technology disclosed herein.

[0019] Figure 13 is an example basic light field system in accordance with embodiments of the technology disclosed herein.

[0020] Figure 14 illustrates an example process flow in accordance with embodiments of the technology disclosed herein.

[0021] Figure 15 is another example light field system in accordance with embodiments of the technology disclosed herein.

[0022] Figure 16 illustrates an example augmentation flow in accordance with embodiments of the technology disclosed herein.

[0023] Figure 17 illustrates an example computing module that may be used in implementing various features of embodiments of the disclosed technology.

[0024] The figures are not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be understood that the invention can be practiced with modification and alteration, and that the disclosed technology is limited only by the claims and the equivalents thereof.

Detailed Description of the Embodiments

[0025] As discussed above, HMDs generally employ one or more displays placed in front of the human eye. 2D images are shown on the displays, and the eye focuses on the display itself. In order to provide a clear, focused image, optics placed between the eye and the display make the display appear farther away than the display may actually be in reality. In this way, the eye is capable of focusing beyond the space occupied by the display.

[0026] HMDs are generally too large, have limited resolution, or both. This is due to the distance between pixels in the display, or pixel pitch. When there is sufficient distance between the display and the eye, pixel pitch does not impact resolution to a great extent, as the space between pixels is not as noticeable. However, HMDs place displays near the eye, making pixel pitch an important limiting factor related to resolution. In order to increase resolution, larger displays are necessary to increase the number of pixels in the display. Larger displays require larger optics to create the illusion of space between the eye and the display.

[0027] Traditionally, the display in a HMD is a 2D surface projecting a 2D image to each eye. Some HMDs utilize waveguides in an attempt to simulate 3D images. Waveguides, however, are complex, requiring precise design and manufacture to avoid errors in beam angle and beam divergence.

[0028] One solution that provides true three-dimensional images is the use of near-eye light field displays. Similar to light field cameras, a near-eye light field display creates a representation of light as it crosses a plane that provides information not only relating to the intensity of the light, but also to the direction of the light rays. Traditional 2D displays only provide information regarding the intensity of light on a plane. Waveguides with diffractive optical elements may be used to synthesize light rays on a plane, but this approach is complex, requiring precise design and manufacture to avoid errors in light ray angle and divergence.
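For clarity, the distinction just drawn can be summarized as a data structure; the following Python sketch is purely illustrative, and its field names are assumptions rather than anything defined in this disclosure.

```python
from dataclasses import dataclass

# Sketch of the light field notion described above: unlike a 2D image,
# which records only intensity per position, a light field sample on a
# plane also records the direction of the ray crossing that plane.

@dataclass
class LightFieldSample:
    x: float            # position on the plane
    y: float
    theta: float        # ray direction, e.g., angles in radians
    phi: float
    intensity: float    # light intensity along the ray
```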

[0029] Embodiments of the technology disclosed herein are directed toward systems and methods for near-eye light field displays. More particularly, the various embodiments of the technology disclosed herein relate to near-eye light field displays providing enhanced resolution and color depth compared to conventional near-eye light field displays. As described in greater detail below, embodiments of the technology disclosed herein enable near-eye display systems with true light-field representations, providing true 3D imaging with greater resolution, and without the need for complex waveguides. By scanning a light source array, such as an LED or OLED display, while controlling the intensity of the light source array in synchronization with the scan pattern, the impact of pixel pitch is reduced, resulting in increased resolution without the need for larger displays or optics. In some embodiments, the intensity modulation of the pixels is achieved by turning the pixels on and off for a time duration that is dependent on the desired intensity. For example, if higher intensity is desired on a red pixel than on a blue pixel, the red pixel would be lit up longer than the blue pixel. In some embodiments, the intensity modulation of the pixels is achieved by adjusting the current or voltage to the light emitter in the pixel.
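As an illustration of the duty-cycle modulation just described, consider the following minimal Python sketch; the 60 Hz frame period, function name, and intensity values are assumptions for illustration only.

```python
# Minimal sketch of duty-cycle intensity modulation: each pixel is switched
# on for a fraction of the frame period proportional to its desired
# intensity. The ~60 Hz frame period is an assumed value for illustration.

FRAME_PERIOD_US = 16_667  # one frame at ~60 Hz, in microseconds (assumed)

def on_time_us(desired_intensity: float) -> int:
    """Map a desired intensity in [0.0, 1.0] to a pixel on-time in microseconds."""
    desired_intensity = max(0.0, min(1.0, desired_intensity))
    return round(desired_intensity * FRAME_PERIOD_US)

# Per the example above: a higher-intensity red pixel is lit longer than a
# lower-intensity blue pixel within the same frame.
red_on_us = on_time_us(0.9)   # ~15,000 us
blue_on_us = on_time_us(0.3)  # ~5,000 us
```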

[0030] Moreover, the distance between the light source array and an array of lenses may be adjusted during the retention time of the human eye. In this way, the rays from one or more lenses in the array of lenses provide the depth cues for an image within the field of view of the HMD. In some embodiments, the Z-direction motion may be achieved by one or more vertical actuators, separate from the actuators utilized for lateral movement of the light source displays. In some embodiments, said vertical actuators may be used to move a lens. In various embodiments, one or more actuators may be utilized which are capable of both lateral (in-plane) and vertical (out-of-plane) movement of the light source displays, without the need for separate, particularized actuators for each type of movement. In this manner, true light-field display is possible, without the need for the use of light-field cameras or continually changing the focus of image capture cameras and piecing the images together into a representation. Non-limiting examples of such vertical actuators and dual-plane (in-plane & out-of-plane) actuators include actuators disclosed in co-pending U.S. Patent Application No. 15/089,276, filed April 1, 2016, the disclosure of which is herein incorporated by reference in its entirety.

[0031] By employing embodiments of the systems and methods described below, it is possible to reduce the size and/or enhance the resolution and color of traditional HMDs, or convert a traditional HMD into a light field display.

[0032] Figure 1 illustrates the basic operation of a near-eye light field display 100 in accordance with embodiments of the technology disclosed herein. As illustrated, the near-eye light field display 100 comprises a light source array 110 and an array of lenses 120. Non-limiting examples of a light source array 110 include an LED, OLED, LCD, plasma, laser, or other electronic visual display technology. The array of lenses 120 may comprise a plurality of lenses, each configured to provide a different perspective for objects within each lens's field of view. In some embodiments, array of lenses 120 may comprise a plurality of injection-molded lenses. By capturing multiple views of the same field of view, the direction of the rays of light as they impact the different sections of the lenses are captured, providing an indication of the location and depth of the object within the field of view, represented by the virtual image 130 illustrated in Figure 1. This virtual image 130 represents the actual position of an object in space, which is beyond the point in space where the light source array 110 is located. The array of lenses 120 enables the eye 140 to focus not on the point in space occupied by the light source array 110, but instead to focus on the point in space represented by the virtual image 130.

[0033] In other words, by utilizing a near-eye light field display, the eyes can "look through" the display and focus on a virtual image 130 beyond the light source (display) 110. Note, however, that divergence of the rays of light is provided by the distance between the light source array 110 and the array of lenses 120. If the spacing between the light source array 110 and the array of lenses 120 is decreased, the divergence of the rays of light will increase, and the virtual image 130 will appear to be closer. Conversely, if the spacing between the light source array 110 and the array of lenses 120 is increased, the divergence of the rays of light will decrease, and the virtual image 130 will appear to be further away. Accordingly, a light field display may be constructed from a traditional HMD by providing a means of adjusting the focus distance between the light source array 110 and the array of lenses 120 according to the desired apparent distance of the virtual image 130. This adjustment in focus distance changes the direction of the rays as required for a light field display. In various embodiments, the focus adjustment is done within the retention time of the eye while different portions of the light source array are lit up so that different portions of the virtual image 130 appear at different distances. The focus adjustment is done when the focus of the eye changes in some embodiments, as can happen when the user looks at objects that are closer or further away.
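The qualitative behavior described above is consistent with the standard thin-lens relation; the disclosure does not state this formula, so it is offered here only as a supporting sketch:

$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}, \qquad \text{so for } d_o < f: \quad |d_i| = \frac{f\,d_o}{f - d_o},$$

where $d_o$ is the lens-to-display spacing, $f$ is the focal length of the lens, and $|d_i|$ is the apparent distance of the virtual image 130 behind the lens. Decreasing $d_o$ decreases $|d_i|$, so the virtual image appears closer; increasing $d_o$ toward $f$ pushes it farther away, consistent with the divergence behavior described above.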

[0034] When the virtual image 130 is within the field of view (FOV) of multiple lenses within the array of lenses 120, the rays entering the eye 140 representing the virtual image 130 may come from more than one lens. Figure 2 illustrates such an example arrangement in accordance with embodiments of the technology disclosed herein. As an initial matter, it will be noted that, throughout the present disclosure, like-numbered elements as between the various figures may generally be substantially similar in nature, and letters (e.g., a, b, c, etc.) may be used to denote various instances of these elements. Any exceptions to this generality will either be explained herein or will be apparent to one of ordinary skill in the art upon studying the present disclosure.

[0035] As illustrated in Figure 2, the virtual image 130 rests within the FOV of two lenses of the array of lenses 120. Although described with respect to an object falling within the FOV of only two lenses of the array of lenses 120, a person of ordinary skill would appreciate that a virtual image 130 may fall within the FOV of more than two lenses of the array of lenses 120 in other embodiments. The light source display 110 includes two point sources 240a, 240b such that the light source display 110 is capable of properly displaying the virtual image 130 within the FOV of each lens, respectively. Proper alignment of the point sources 240a, 240b is necessary to ensure that the images created by the respective lenses overlap, enabling the human eye to properly recreate the virtual image. Accordingly, both the focus and lateral position of the light source array 110 are equally important to ensure a proper, true light field display. This is true not only when an object is within the FOV of multiple lenses of the array of lenses 120, but also when the eye 140 moves.
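The alignment requirement for the point sources 240a, 240b can be made concrete with a paraxial thin-lens sketch; the lens pitch, focal length, spacing, and helper function below are illustrative assumptions, not values from this disclosure.

```python
# Paraxial sketch: lateral display position that places a lens's virtual
# image at a common point, so the images from adjacent lenses overlap.

def source_position_mm(lens_axis_mm: float, image_x_mm: float,
                       d_o_mm: float, d_i_mm: float) -> float:
    """Thin-lens, upright virtual image: lateral offsets scale by the
    magnification m = d_i / d_o about each lens's own axis."""
    m = d_i_mm / d_o_mm
    return lens_axis_mm + (image_x_mm - lens_axis_mm) / m

f_mm, d_o_mm = 12.0, 10.0                 # assumed; within the 7-20 mm range cited below
d_i_mm = f_mm * d_o_mm / (f_mm - d_o_mm)  # 60 mm apparent image distance
s_240a = source_position_mm(0.0, 5.0, d_o_mm, d_i_mm)  # ~0.83 mm (lens A axis at 0)
s_240b = source_position_mm(8.0, 5.0, d_o_mm, d_i_mm)  # 7.5 mm (lens B axis at 8 mm)
```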

[0036] Figure 3 shows a basic diagram of an example HMD 300 in accordance with embodiments of the technology disclosed herein. Figure 3 is a top view of the example HMD 300, meaning that the view is looking down on the top of a human head. The basic diagram is not intended to be an exhaustive depiction of all components of a near-eye HMD in accordance with the present disclosure, and actual implementations may have different configurations and form factors. A person of ordinary skill would appreciate that the diagram is intended merely to describe the basic components of the HMD 300, and that other embodiments are applicable.

[0037] As illustrated in Figure 3, the HMD 300 is configured to sit in front of a user's eyes 140, similar to a pair of glasses. The HMD 300 comprises two imaging display systems 320a, 320b positioned in front of the user's eyes 140. The imaging display systems 320a, 320b include the components for generating images, such as the light source array 110, array of lenses 120, and other components discussed above with respect to Figures 1 and 2. As illustrated, the imaging display systems 320a, 320b may be curved, but in some embodiments the imaging display systems 320a, 320b may be flat or a combination of curved and flat portions. For example, in some embodiments, the array of lenses 120 may be curved as if on the surface of a sphere such that the optical axis of each lens passes approximately through the center of rotation of the eye. In some embodiments, the curvature may be larger such that the optical axis of each lens passes approximately through the pupil of the eye when facing forward. In some embodiments, the curvature is such that the optical axis of each lens passes somewhere between the center of rotation of the eye and the pupil of the eye when facing forward. In some embodiments, the imaging display systems 320a, 320b may be opaque. The components of the imaging display systems 320a, 320b will be discussed in greater detail with respect to Figure 5.

[0038] Cameras 310a, 310b may be disposed on the HMD 300 in various embodiments. The cameras 310a, 310b are configured to capture the user's FOV. Although shown as being disposed such that the cameras 310a, 310b are positioned on the side of the user's head, other embodiments may have cameras disposed elsewhere on the HMD 300. In some embodiments, the cameras 310a, 310b may be disposed on the top and/or the bottom of the imaging display systems 320a, 320b, respectively. In some embodiments, the HMD 300 may include multiple cameras per imaging display system 320a, 320b, respectively. The cameras 310a, 310b may comprise various different types of image sensors. Non-limiting examples of image sensors that may serve as cameras 310a, 310b include: video cameras; light-field cameras; infrared (IR) cameras; low-light cameras; wide dynamic range cameras; high speed cameras; or thermal imaging sensors; among others. In various embodiments, the HMD 300 may include a combination of the above-identified image sensors to provide a variety of imaging data to the user, whether all at once or in different operational modes.

[0039] The cameras 310a, 310b and the imaging display systems 320a, 320b may be combined within a housing 330. The housing 330 enables the imaging display systems 320a, 320b to be positioned in front of the user's eyes 140. In various embodiments, the housing 330 may be configured as a pair of eyeglasses, with the imaging display systems 320a, 320b positioned where the glass lenses would generally be positioned. In some embodiments, the housing 330 may be configured to wrap around the eyes 140 to prevent any outside light from entering the HMD 300. A variety of components may be included in the housing 330 to maintain the positioning of the HMD 300 on the user's head. Various embodiments may include nasal supports to allow the HMD 300 to rest on the user's nose (not pictured). Various embodiments may include interpupillary distance (IPD) adjustment so that the distance between one display system 320a and the second display system 320b may be adjusted to substantially match the distance between the user's eyes. In some embodiments, the housing 330 may include ear supports to rest on the user's ears (not pictured). The housing 330 may wrap around the user's head, similar to swimming or welding goggles. The housing 330 may include a webbing structure to support the HMD 300 by resting across the skull of the user, similar to the supporting webbing structure of hard hats. The supports of the housing 330 may include an adjustable strap to allow the HMD 300 to be adjusted to fit correctly on a user's head.

[0040] Figure 4 is a block diagram illustrating the components included within an example near-eye light field system 400 in accordance with embodiments of the technology of the present disclosure. The near-eye light field system 400 may be implemented in an HMD, such as the HMD 300 discussed with respect to Figure 3, to provide a true 3D representation of a scene in the user's FOV, mimicking transparent eyeglasses. Although discussed with respect to this example embodiment, after reading the description herein it will be apparent to one of ordinary skill in the art that the disclosed technology can be implemented in any of a number of different HMD applications.

[0041] The example near-eye light field system 400 of Figure 4 includes a processor unit 420, one or more cameras 410, gyros/accelerometers 430, actuator control components 450, and imaging display systems 470 having one or more source displays 460 and actuators 440. The one or more cameras 410 may be similar to the cameras 310a, 310b discussed with respect to Figure 3. As discussed above, the one or more cameras 410 may be configured to capture objects within the FOV of the user and, in combination with the imaging display systems 470, present the scene within the user's FOV to the user's eyes, as if nothing were blocking the user's view. In some embodiments, the one or more cameras 410 may be light field cameras, which are designed to capture a light field representation of the field of view of the camera (i.e., the light entering the camera is broken up by an array of lenses disposed in front of an image sensor to capture both the intensity and the direction of the light). Other embodiments may utilize other image sensors as the one or more cameras 410, such as traditional video cameras. In such embodiments, the one or more traditional cameras would capture a series of pictures at different focal depths.

[0042] The images from the one or more cameras 410 may be fed into the processor unit 420 to compute a true light field representation of the actual image at different depths based on the captured images. For example, where a traditional camera is used, the images from the one or more cameras 410 may comprise a series of pictures at different focal depths. To create the three-dimensional actual image, the different captured images are processed to provide depth to the actual image. The computed light field is used to compute when the light source array (such as the light source array 110 discussed with respect to Figure 1) is turned on to generate the correct light field on the surface of the array of lenses during scanning of the light source array by one or more actuators.
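For the traditional-camera case just described, one common way to recover depth from a focal stack is depth-from-focus; the disclosure does not specify an algorithm, so the Python sketch below shows only one plausible approach.

```python
import numpy as np

# Depth-from-focus sketch: for each pixel, pick the focal depth at which
# local contrast (gradient energy) is highest. `stack` holds one grayscale
# image per focal depth; `depths` holds the corresponding focus distances.

def depth_from_focus(stack: np.ndarray, depths: np.ndarray) -> np.ndarray:
    """stack: (n_depths, H, W) array; returns an (H, W) depth map."""
    gy, gx = np.gradient(stack.astype(float), axis=(1, 2))
    sharpness = gx ** 2 + gy ** 2            # per-slice local contrast
    return depths[np.argmax(sharpness, axis=0)]
```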

[0043] In some embodiments, the near-eye light field system 400 may include one or more gyroscopes or accelerometers 430, providing information representative of the particular position of the user's head. Furthermore, the images captured by the one or more cameras 410 may be processed to determine the motion of the user's head, as well. This information may be fed into the processor unit 420 to utilize in computing the light field to account for changes in the position of the user's head.

[0044] The near-eye light field system 400 may include one or more imaging display systems 470. The imaging display system 470 may be similar to the imaging display systems 320a, 320b discussed with respect to Figure 3. The imaging display system 470 of Figure 4 may include one or more source displays 460 and one or more actuators 440, as well as an array of lenses (not pictured). The processor unit 420 may be communicatively coupled to the one or more source displays 460, for example, through a driver associated with each source display 460. In this way, the processor unit 420 may control illumination of the lighting elements of the one or more source displays 460 to generate the correct light field representation on the array of lenses. In various embodiments, the processor unit 420 may also be communicatively coupled to the actuator control components 450, which control the actions of the one or more actuators 440. In this way, the movement of the one or more actuators 440 may be synchronized with the illumination of the one or more source displays 460.

[0045] Figure 5 illustrates an example imaging display system 500 in accordance with embodiments of the technology disclosed herein. The example imaging display system 500 may be similar to the imaging display systems discussed with respect to Figures 3 and 4. The example imaging display system 500 may be implemented in a plurality of different HMD solutions, independent of the form factor of the HMD. For ease of discussion, the imaging display system 500 is shown for a single source display and lens. One of ordinary skill will appreciate that the discussion is applicable to the one or more source displays of the light source array and each lens of the array of lenses discussed with respect to Figures 1 and 2. The single source display/lens arrangement shown in Figure 5 illustrates the basic structure of the example imaging display system 500. Although discussed as such, nothing in this description should be interpreted to limit the scope of the present disclosure to systems with a single light source display and light field lens.

[0046] As illustrated, the example imaging display system 500 includes a source display 510, actuator 520, light field lens 560, processor unit 540, and actuator control components 530. Non-limiting examples of a source display 510 include an LED, OLED, LCD, plasma, or other electronic visual display technologies. The processor unit 540 may be connected to the source display 510 to control the illumination of the pixels of the source display 510, similar to the processor unit 420 discussed with respect to Figure 4. In some embodiments the processor unit 540 may include one or more of a microprocessor, memory, a field programmable gate array (FPGA), and/or display and drive electronics. A light field lens 560 is disposed between the source display 510 and the user's eye (not pictured). In some embodiments, the light field lens 560 is composed of multiple lenses arranged along the optical axis in order to improve the optical performance as compared with a single lens.

[0047] As discussed above, embodiments of the technology disclosed herein enable enhanced resolution without the need for larger displays. This helps to reduce the overall cost, size, and weight of HMDs and near-eye displays. As illustrated in Figure 5, the source display 510 may be disposed on one or more actuators 520. In various embodiments, the one or more actuators 520 may include one or more of: voice coil motors ("VCMs"); shape memory alloy ("SMA") actuators; piezoelectric actuators; MEMS actuators; a combination thereof; among others. In some examples, the one or more actuators 520 may comprise a MEMS actuator similar to the MEMS actuator disclosed in U.S. Patent Application No. 14/630,437, filed February 24, 2015, the disclosure of which is hereby incorporated herein by reference in its entirety. Non-limiting examples of material for connecting the source display 510 to the one or more actuators include: epoxy; solder; metal pastes; wire bonding; among others. To control the actuators 520, the imaging display system 500 may include actuator control components 530, including electronics for controlling the actuators and sensors identifying the position of the actuators 520. The actuator control components 530 are similar to the actuator control components 450 discussed with respect to Figure 4. In various embodiments, the actuator control components 530 and source display 510 may be synchronized with the movement of the one or more actuators 520.

[0048] To enhance the resolution of the source display 510, scanning of the source display 510 through the use of the one or more actuators 520 enhances spatial resolution of images, i.e., how closely lines can be resolved in an image. In terms of pixels, the greater the number of pixels per inch ("ppi"), the clearer the image that may be resolved. Figure 6A illustrates a source display comprising four different colored pixels 610 (red, blue, green, and yellow), and the motion of the source display by one or more actuators, in accordance with embodiments of the technology of the present disclosure. As the source display is scanned by the one or more actuators, the pixels are illuminated in a synchronized pattern. In the illustrated example, the source display is moved in a raster scan 620. The type of scan pattern utilized in other embodiments may be taken into account when computing the synchronized pattern. In general, the scan pattern can be any desired Lissajous or similar figure obtained by having a different frequency of repetition (not necessarily sinusoidal) for the scan in the x axis and the scan in the orthogonal y axis. In other words, the scan pattern discussed with respect to Figure 6A concerns lateral (in-plane) motion.

[0049] As illustrated in Figure 6A, each pixel is translated to be illuminated near each black dot in the pattern 620, essentially turning each individual pixel into a 4x4 mini-display. Other embodiments may employ other scan patterns 620, including but not limited to a 3x3 pattern or a 2x2 pattern. By scanning every pixel in accordance with the scan pattern 620, each mini 4x4 display overlaps the others, creating a virtual display having full color superposition (i.e., all colors are represented at each pixel position). This is illustrated in Figure 6B. Every pixel position contains all colors contained in the display (e.g. red, green, yellow and blue) and thus can accurately represent any color. Moreover, the number of pixels per inch of each color is increased by a factor of 16, resulting in higher resolution without the need to utilize a larger display having a greater number of pixels per inch. If the display originally has a VGA resolution with 640 by 480 pixels, the scanning converts it into a display with 2,560 by 1,920 pixels, two times better than 1080p resolution. The overall increase in resolution is proportional to the number of scan points included within the scan pattern 620.
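The scan geometry and resolution arithmetic from the preceding paragraphs can be sketched as follows; the pixel pitch and the 3:4 Lissajous frequency ratio are illustrative assumptions.

```python
import numpy as np

NATIVE_W, NATIVE_H = 640, 480        # VGA source display, per the example above
SCAN_N = 4                           # 4x4 scan points per pixel
PIXEL_PITCH_UM = 40.0                # assumed pixel pitch, for illustration only

# Raster scan: step the display by pitch/SCAN_N so every pixel visits a
# 4x4 grid of positions, yielding full color superposition.
step_um = PIXEL_PITCH_UM / SCAN_N
raster_offsets = [(ix * step_um, iy * step_um)
                  for iy in range(SCAN_N) for ix in range(SCAN_N)]

# Effective resolution grows by SCAN_N in each axis: 640x480 -> 2560x1920.
effective = (NATIVE_W * SCAN_N, NATIVE_H * SCAN_N)

# A Lissajous-style pattern instead drives x and y at different repetition
# frequencies (not necessarily sinusoidal); 3:4 is an arbitrary example.
t = np.linspace(0.0, 1.0, 1000)
x_um = (PIXEL_PITCH_UM / 2) * np.sin(2 * np.pi * 3 * t)
y_um = (PIXEL_PITCH_UM / 2) * np.sin(2 * np.pi * 4 * t)
```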

[0050] Figure 7 illustrates the benefits of scanning a light source array in accordance with the example scan pattern of Figures 6A and 6B. The Y-axis indicates the intensity of light from a light source, such as a light source array, while the X-axis indicates the angle of the light. Angle is shown because the human eye detects intensity as a function of angle. Also, lateral position of the light source array is converted to angle by the light field lens. With traditional displays, such as LED displays, as the light is modulated, the intensity of the light changes as a function of angle in discrete steps, as illustrated by blocks 710 of Figure 7. Each step represented by blocks 710 represents one pixel in the fixed display. The figure shows the case for a display with 100% fill factor where there is no space between pixels. In other embodiments, the display may have a fill factor less than 100%, leaving a gap between pixels with zero intensity. In other embodiments, when taking color into account, many of the pixels will be black when reproducing an image of a single color and gaps between pixels with zero intensity will be even larger. The intensity as a function of angle with a fixed display is pixelated, meaning there are discrete steps in intensity in the transition from one pixel to the next that are of sufficiently low angular resolution as to be perceivable by the eye. By moving the display, however, the convolution of the pixel pitch and the change in angle of the modulated light produces a smoother curve 720, indicating greater resolution. This results in enhanced resolution, as can be appreciated when compared to the desired resolution indicated by 730. Some embodiments may result in near perfect resolution. In various embodiments, the intensity modulation for each pixel may itself be digitized rather than continuous, as it may be updated once every frame, but there is still significant improvement in resolution compared with the fixed display.
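The contrast between blocks 710 and curve 720 can be reproduced numerically: sweeping the display convolves each pixel's aperture over angle. The following sketch uses invented intensity values for illustration.

```python
import numpy as np

pixel_intensities = np.array([0.1, 0.4, 0.9, 0.6, 0.2])  # five fixed pixels

# Fixed display (blocks 710): intensity vs. angle is a staircase, here
# sampled at 16 angular positions per pixel.
stepped = np.repeat(pixel_intensities, 16)

# Scanned display (curve 720): sweeping the display convolves each pixel's
# aperture over angle, smoothing the staircase toward the desired profile.
aperture = np.ones(16) / 16
smoothed = np.convolve(stepped, aperture, mode="same")
```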

[0051] Another benefit of scanning the display is the ability to include different sensors within the display without sacrificing resolution. Figure 8 illustrates an example display configuration 800 in accordance with embodiments of the technology disclosed herein. As illustrated, the display configuration 800 includes pixels in three colors (blue, red, and green). Dispersed in between the colored pixels are sensors 810. Each sensor 810 may be configured to scan a small portion of the eye as the actuator moves the display 800. The scanning by each sensor 810 may follow the same scan pattern as discussed above with respect to Figures 6A and 6B. The sensors 810 may pick up differences in the light reflected by different portions of the eye, distinguishing between the iris, pupil, and the whites of the eyes. In some embodiments, the sensors 810 may be used to determine the light reflected from the retina. In some embodiments, the light reflected from the retina may be used to image the retina and used for identification of the user, medical diagnosis, or any other application that benefits from imaging of the retina. In some embodiments, the light reflected from the retina may be used to determine the focus of the eye. For example, when the eye is focused at the current position of the display, the light from the display forms a small point on the retina. The reflected light similarly forms a small dot on the display surface where the sensors are located. In one embodiment, the focus of the eye is determined by measuring the size and/or position of the reflected spot from the retina on the display surface. In some embodiments, the sensors 810 may distinguish between different parts of the eye based on the light reflected or refracted off the eye. As the sensors are disposed in the space in between pixels, the scanning of the display in accordance with the scan pattern allows for the same proportional increase in resolution while still including sensors 810 in the display configuration 800. In one embodiment, the sensors 810 are incorporated into the same process that is used to fabricate the RGB display pixels. For example, if the display elements are light emitting diodes (LEDs), the sensors may also be LEDs but reverse-biased to sense light rather than emit it.

[0052] In various embodiments, the light field lenses are designed to capture as much light as possible, while maintaining a resolution better than that of the human eye. In some embodiments, each light field lens may be designed with an aperture between 5 and 10 mm. Light field lenses with various aperture sizes may be combined into a single lens array in some embodiments. The focal length of each light field lens may be between 7 and 20 mm. This focal length is for the light field lens itself, and does not take into account chromatic aberration within each lens. Chromatic aberration results in light of different colors having different focal lengths. To account for chromatic aberration in each light field lens in various embodiments, each colored pixel may be turned on during scanning at different focal positions, thereby ensuring that the different colored light impacts the eye at the same focal point. In other embodiments, standard techniques for minimizing chromatic aberration may be used, such as but not limited to doublets and diffraction gratings. Where LEDs or another Lambertian emitter (distributed source) is utilized, a microlens may be disposed on top of the light source to account for the distributed nature of the light.
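A hedged sketch of the per-color focus compensation described above follows; the focal offsets and tolerance are invented for illustration and would, in practice, come from the actual lens design.

```python
# During a depth (Z) scan, enable each color's pixels only while the display
# passes through that color's focal position, so all colors reach the eye
# at a common focal point. Offsets below are assumptions, not design data.

FOCAL_OFFSET_UM = {"red": +8.0, "green": 0.0, "blue": -10.0}
TOLERANCE_UM = 1.0

def colors_to_enable(z_position_um: float) -> list:
    """Colors whose focal position the scan is currently crossing."""
    return [color for color, offset_um in FOCAL_OFFSET_UM.items()
            if abs(z_position_um - offset_um) <= TOLERANCE_UM]
```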

[0053] As discussed above, in some embodiments the light field lenses may be incorporated into an array of lenses that is disposed between the light source displays and the eye. Figure 9A illustrates an example array of lenses in accordance with embodiments of the technology disclosed herein. In the example array of lenses, each lens 901 comprises a plano-convex lens. In various embodiments, other lens shapes may be used. In some embodiments, three aspheric lenses may be used, where the first lens is positive power, the second lens is negative power, and the last lens is low power. In some embodiments, three aspheric lenses may be used, where the first lens is negative power, the second lens is positive power, and the last lens is low power. In some embodiments, five aspheric lenses may be used, where the first lens is positive power, the second lens is negative power, the third lens is positive power, and the fourth and fifth lenses are low power. Each lens 901 may be configured to capture light from a single light display in some embodiments, or multiple lenses 901 may be configured to capture light from one or more light displays in embodiments where an array of light source displays is utilized. Each lens 901 may be made of any transparent material, including but not limited to plastic or glass. In some embodiments, the lenses are plastic injection molded. To reduce scattering of the light rays at the transitions between lenses 901, an opaque mask 902 may be applied on the planar side at each transition point between lenses 901. The opaque mask 902 may be made of an opaque material, including but not limited to soma. The opaque mask 902 is designed to prevent light rays from scattering at the edges of the lenses 901 at the transition points, eliminating the effects of scattered light on the resolution and the generated light field representation.

[0054] The array of lenses illustrated in Figure 9A may be utilized with a planar display. Where a curved display is utilized, a modified array of lenses may be configured to more accurately capture the light from the displays. Figure 9B illustrates an example curved array of lenses in accordance with embodiments of the technology disclosed herein. As illustrated in Figure 9B, the array of lenses comprises a plurality of lenses 901 configured to mate with each other at seams 903 when folded into a curved shape. The seams 903 may comprise an opaque mask, similar to the opaque mask 902 discussed with respect to Figure 9A, doubling as both a shield to eliminate scattering of light at the transition between the lenses and a hinge for the lenses to be folded into the proper shape for each embodiment. In some embodiments, the seams 903 may be formed by a thinned portion of the same material comprising the lenses 901. The number of sides of each lens 901 may vary depending on the number of lenses to be included in the design of the array of lenses. In some embodiments, each lens may have four or more sides. In various embodiments, each lens 901 will have the same number of sides as the other lenses in the array of lenses. In other embodiments, a first set of lenses will have a first number of sides, and a second set of lenses will have a second number of sides, similar to the example illustrated in Figure 9B. In other embodiments, the lens array may be molded directly into a curved shape without the need for folding.

[0055] As discussed above, the curved array of lenses discussed with respect to Figure 9B is suitable for use with a curved array of light source displays. Figure 10 illustrates an example array of light source displays 1000 in accordance with embodiments of the technology disclosed herein. As illustrated in Figure 10, the curved array of light source displays 1000 may have a similar shape as the array of lenses discussed with respect to Figure 9B. The curved array of light source displays 1000 includes a plurality of light source displays 1001. In various embodiments, each light source display 1001 may be disposed on a rigid circuit board 1002. In some embodiments, more than one light source display 1001 may be disposed on each rigid circuit board 1002. Each rigid circuit board 1002 may have a shape similar to each lens in the associated array of lenses. Each light source display 1001 may be mounted on top of an actuator, the actuator being disposed on the rigid circuit board 1002. Each rigid circuit board 1002 may be connected together with a flexible circuit 1003, enabling the rigid circuit boards 1002 to be shaped into the curved shape. A connector 1004 connects the light source displays 1001 and actuators (when present) to the rest of the system.

[0056] Figure 11 illustrates a cross-sectional view of an example near-eye display system 1100 in accordance with embodiments of the technology of the present disclosure. The example near-eye display system 1100 includes an array of lenses 1101 (similar to the array of lenses discussed with respect to Figure 9B) disposed opposite an array of displays 1102 (similar to the array of light source displays 1000 discussed with respect to Figure 10). In some embodiments, supports 1103 may be connected to the transition points of the array of lenses 1101 and the flexible circuit of the array of displays 1102. The supports 1103 maintain a fixed distance between the two arrays, ensuring that a proper light field representation is generated.

[0057] The outside edges of the array of lenses 1101 may be connected to an exterior platform 1104, while the array of displays 1102 may be connected to an interior platform 1105 in various embodiments. In this manner, the near-eye display system 1100 may be modularly constructed, with the array of lenses portion constructed separately from the array of displays portion and combined after fabrication. The exterior platform 1104 and/or the interior platform 1105 may further be configured to house additional components of the near-eye display system 1100, including but not limited to: control computer or processing components; cameras; memories; motion sensors, such as gyroscopes or accelerometers; or a combination thereof. The exterior platform 1104 and interior platform 1105 may be created through injection molding or press molding, and may comprise many different materials, including but not limited to plastic.

[0058] In some embodiments, the light source displays of the array of displays 1102 may be disposed on an actuator configured to provide both in-plane scanning, as well as out-of-plane motion. In-plane motion refers to motion within the same horizontal plane as the actuator, while out-of-plane motion refers to motion in the vertical direction above or below the actuator. In this way, a light field display may be generated through scanning of the light source displays of the array of displays 1102 alone. In some embodiments, only a light source display located in the center of the array of displays 1102 may be disposed on an actuator capable of both in-plane and out-of-plane scanning. In other embodiments, the light source displays surrounding and abutting the central light source display of the array of displays 1102 may be disposed on actuators capable of in-plane and out-of-plane motion, while all the exterior light source displays are disposed on stationary mounts or in-plane-only actuators. In some embodiments, the out-of-plane motion may be determined based on one or more position sensors disposed on the array of displays 1102, similar to the sensors discussed with respect to Figure 8, to determine the proper focus for the light field representation based on where the eye is focusing, whether the user is squinting, or a combination thereof.

[0059] In some embodiments, the out-of-plane motion may be provided by moving the array of displays relative to the array of lenses, or vice versa. Figure 12 illustrates another cross-sectional view of an example near-eye display system 1200 in accordance with embodiments of the technology disclosed herein. The near-eye display system 1200 is similar to the example system 1100 discussed with respect to Figure 11, with an out-of-plane motion device 1206 included. The out-of-plane motion device 1206 may comprise an actuator, voice coil motor (VCM), or other motion device in various embodiments. The voice coil motor is composed of a coil of wire and magnets. When electrical current flows through the coil, the induced magnetic field interacts with the magnets, generating a controlled force. In the illustrated example, a VCM 1206 is disposed on each side of the array of displays 1202, enabling the array of displays 1202 or the array of lenses 1201 to be moved in the vertical direction (as illustrated by the arrows included in Figure 12) for focusing. Rollers 1207 enable the interior housing 1205 to move relative to the exterior housing 1204, or the opposite in some embodiments. In one embodiment, any lateral motion of the lens with respect to the display during focusing is compensated by in-plane motion of the actuator under the display. In one embodiment, any lateral motion of the lens with respect to the display during focusing is compensated by electronic shifting of the image on the display.
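The controlled force described above is the Lorentz force on the coil; the relation below is standard electromagnetics rather than anything stated in this disclosure:

$$F = n\,B\,L\,I,$$

where $n$ is the number of coil turns, $B$ is the flux density of the magnets at the coil, $L$ is the length of one turn within the field, and $I$ is the drive current; the focusing force (and hence the resulting displacement) is proportional to the drive current.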

[0060] Moreover, the near-eye display system 1200 further illustrates an array of displays 1202 where only the central light source displays of the array of displays 1202 are disposed on actuators. The outside light source displays of the array of displays are disposed directly on the rigid circuit board in the illustrated embodiment.

[0061] Figure 13 is an example basic light field system 1300 in accordance with embodiments of the technology disclosed herein. As illustrated in Figure 13, the basic light field system 1300 includes at least two cameras, a left camera 1302 and a right camera 1304. Each camera is configured to provide an image stream to a particular eye, left and right. In many embodiments, the left camera 1302 and right camera 1304 may include a motorized focus such that the focus of each camera 1302, 1304 can be changed dynamically. The focus of each camera 1302, 1304 may be controlled by a camera focus control 1306. In some embodiments, each camera 1302, 1304 may have an independent camera focus control 1306.

[0062] The focus of each camera 1302, 1304 may be controlled based on the focus of the user's eyes. The basic light field system 1300 may include eye focus sensors 1308, 1310 disposed within a display in front of the left eye and right eye, respectively. Each eye focus sensor 1308, 1310 may include one or more focus sensors in various embodiments. In some embodiments, the eye focus sensors 1308, 1310 may be disposed in the spaces between the pixels of a left display 1312 and a right display 1314. The eye focus sensors 1308, 1310 may be used to determine where a user's eyes are focused. The information from the eye focus sensors 1308, 1310 may be fed into a focus correction module 1316. The focus correction module 1316 may determine the correct focus based on the point where the user's eyes are focused, and provide this information to a display focus control 1318. The display focus control 1318 may provide this information to the camera focus control 1306. The camera focus control 1306 may utilize the focus information from the display focus control 1318 to set the focus of each camera 1302, 1304. The vision of a user with eye focus problems (myopia or hyperopia, i.e., nearsightedness or farsightedness) can be corrected by setting the focus of the cameras to a different depth than the focus of the display. In some embodiments, the cameras 1302, 1304 may be one or more of a light field camera, a standard camera, an infrared camera, or some other image sensor, or a combination thereof. For example, in some embodiments the cameras 1302, 1304 may comprise a standard camera and an infrared camera, enabling the basic light field system 1300 to provide both a normal view and an infrared view to the user.

[0063] The display focus control 1318 may also utilize the desired focus from the focus correction module 1316 to set the focus of the displays 1312, 1314 to the focus of each eye.
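A minimal sketch of this focus loop follows, assuming focus is expressed in diopters and the prescription correction is a simple additive offset; the function names are placeholders, not interfaces defined in this disclosure.

```python
# Focus correction module 1316 (sketch): the displays 1312/1314 track the
# measured eye focus, while the cameras 1302/1304 are offset to compensate
# myopia or hyperopia. Diopter arithmetic here is a simplifying assumption.

def corrected_camera_focus_d(eye_focus_d: float,
                             prescription_offset_d: float) -> float:
    """Camera focus = measured eye focus shifted by the user's correction."""
    return eye_focus_d + prescription_offset_d

def update_focus(eye_focus_d: float, prescription_offset_d: float):
    display_focus_d = eye_focus_d                       # display focus control 1318
    camera_focus_d = corrected_camera_focus_d(
        eye_focus_d, prescription_offset_d)             # camera focus control 1306
    return display_focus_d, camera_focus_d
```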

[0064] Once the cameras 1302, 1304 are set to the desired focus, the cameras 1302, 1304 may capture the scene within the field of view of each camera 1302, 1304. The images from each camera 1302, 1304 may be processed by a processor unit 1320, 1322. As illustrated, each camera 1302, 1304 has its own processor unit 1320, 1322, respectively. In some embodiments, a single processor unit may be employed for both cameras 1302, 1304. The processor unit 1320, 1322 may process the images from each camera 1302, 1304 in a similar fashion as described above with respect to Figure 4. For example, the processor unit 1320, 1322 may compute a light field for use in computing a sequence to turn on different light sources on each display 1312, 1314 to provide a greater resolution during scanning of the displays 1312, 1314. The images are displayed to each eye via the displays 1312, 1314, respectively, in accordance with the light field sequence computed by the processor unit 1320, 1322. In some embodiments, this pseudo-light field display may be provided without scanning of the displays 1312, 1314. In such embodiments, although the resolution of the image may not be enhanced, a light field may still be generated and presented to the eyes.

[0065] Although illustrated as separate components, aspects of the basic light field system 1300 may be implemented in a single component. For example, the focus correction module 1316, the display focus control 1318, and the camera focus control 1306 may be implemented in software and executed by a processor, such as the processor units 1320, 1322.

[0066] Figure 14 illustrates an example process flow 1400 in accordance with embodiments of the technology disclosed herein. The process flow 1400 is applicable to embodiments similar to the basic light field system 1300 discussed with respect to Figure 13. At 1410, one or more focus sensors measure the eye focus of the user. In some embodiments, the measurement may be performed by one or more sensors disposed on a display placed in front of each eye. The eye focus sensors may be disposed in the spaces between pixels of each display in various embodiments, similar to the configuration discussed above with respect to Figure 8.

[0067] At 1420, a desired focus is determined. The desired focus is determined based on the measured eye focus from 1410. The desired focus may be different from the eye focus if the user has focus problems. For example, if the user has myopia (nearsightedness), the desired focus is further away than the measured eye focus. The desired focus may also be determined from the position of the eye, such as a near focus when the user is looking down; from the position of the eye with respect to the image, such as the same focus as a certain object in the scene; or from some other measurement of the eye. Based on the desired focus, the camera focus may be set at 1430. In various embodiments, the camera focus may be set equal to the desired focus. In other embodiments, the camera focus may be set to a focus close to, but not equal to, the desired focus. In such embodiments, the camera focus may be set as close as possible to the desired focus, based on the type of camera employed in the embodiment.

[0068] At 1440, the cameras capture images of objects within the field of view of the cameras. In some embodiments, the field of view of the cameras may be larger than the displayed field of view, enabling the display to be quickly updated during rapid head movement without capturing a new image.
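
The determination at 1420 and the setting at 1430 can be illustrated with a short numeric sketch. Working in diopters (the reciprocal of the focus distance in meters), with the common convention of a negative refractive error for myopia, is an assumption for illustration, and the lens range limits shown are hypothetical.

    def desired_focus_d(eye_focus_d: float, refractive_error_d: float) -> float:
        """Derive the desired focus (1420) from the measured eye focus (1410).
        With a negative error for myopia, the result is a smaller vergence,
        i.e. a focus further away than the measured eye focus."""
        return eye_focus_d + refractive_error_d

    def set_camera_focus_d(desired_d: float, lens_min_d: float = 0.0,
                           lens_max_d: float = 10.0) -> float:
        """Set the camera focus (1430) as close as possible to the desired
        focus, limited by the focus range of the camera employed."""
        return min(max(desired_d, lens_min_d), lens_max_d)

    # A -2 D myope whose eyes are focused at 0.5 m (2 D) is looking at
    # infinity (0 D), so the cameras are driven to 0 D:
    print(set_camera_focus_d(desired_focus_d(2.0, -2.0)))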

[0069] At 1450, each display is set to the eye focus. In some embodiments, the eye focus is the same as the desired focus. In other embodiments, the desired focus is derived from the eye focus identified at 1410. In some embodiments, the displays may be set to the eye focus before the camera focus is set at 1430, after 1430 but before the cameras capture images at 1440, or simultaneously with the actions at 1430 and/or 1440.

[0070] At 1460, the images are displayed to each eye. The images are displayed to each eye via the respective display. In some embodiments, the images may be processed by a processor unit prior to being displayed, similar to the processing discussed above with respect to Figure 13. In some embodiments, the images may be combined with computer-generated images to generate an augmented reality image.
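
One common way to combine a captured image with computer-generated imagery, as described at 1460, is standard alpha compositing. The following sketch assumes floating-point RGB frames with values in [0, 1]; the disclosure does not prescribe a particular compositing method.

    import numpy as np

    def composite_ar(camera_rgb: np.ndarray, overlay_rgb: np.ndarray,
                     overlay_alpha: np.ndarray) -> np.ndarray:
        """Blend a computer-generated overlay onto a camera frame using
        per-pixel alpha compositing."""
        a = overlay_alpha[..., None]  # broadcast alpha over the color channels
        return overlay_rgb * a + camera_rgb * (1.0 - a)

    # Example: a translucent red square augmenting a gray camera frame.
    frame = np.full((4, 4, 3), 0.5)
    overlay = np.zeros_like(frame)
    overlay[1:3, 1:3] = [1.0, 0.0, 0.0]
    alpha = np.zeros((4, 4))
    alpha[1:3, 1:3] = 0.6
    print(composite_ar(frame, overlay, alpha)[1, 1])  # -> [0.8 0.2 0.2]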

[0071] Figure 15 is another example light field system 1500 in accordance with embodiments of the technology disclosed herein. As illustrated in Figure 15, the light field system 1500 contains components similar to those of the basic light field system 1300 discussed with respect to Figure 13. An inertial measurement unit (IMU) 1524 is included within the light field system 1500 of Figure 15. In various embodiments, the IMU 1524 may include one or more gyroscopes, accelerometers, or other motion or orientation sensors, or a combination thereof. The components comprising the IMU 1524 may track the motion of a user's head and provide that information to the processor and memory 1520. In this way, the position of augmented objects or images may be adjusted based on the user's head movements. Augmentation is a way of enhancing the user's experience of the scene by providing additional information about objects within the field of view, or even adding computer-generated objects to the field of view.
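
The adjustment of augmented object positions based on head motion tracked by the IMU 1524 can be sketched as a change of coordinates. The fragment below assumes, for brevity, that sensor fusion has already reduced the IMU output to a single integrated yaw angle; a practical system would fuse gyroscope and accelerometer data and handle full three-axis rotation.

    import numpy as np

    def yaw_rotation(yaw_rad: float) -> np.ndarray:
        """Rotation about the vertical axis, modeling a horizontal head turn."""
        c, s = np.cos(yaw_rad), np.sin(yaw_rad)
        return np.array([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])

    def reanchor(object_world: np.ndarray, head_yaw_rad: float) -> np.ndarray:
        """Re-express a world-anchored augmented object in head coordinates,
        so that it appears fixed in the scene as the user's head turns."""
        return yaw_rotation(-head_yaw_rad) @ object_world

    obj = np.array([0.0, 0.0, -2.0])        # object 2 m straight ahead (world frame)
    print(reanchor(obj, np.radians(10.0)))  # shifts opposite the head turn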

[0072] Figure 16 illustrates an example augmentation flow 1600 in accordance with embodiments of the technology disclosed herein. The augmentation flow 1600 is applicable to embodiments similar to the light field system 1500 discussed with respect to Figure 15. Steps 1610, 1620, 1630, and 1640 may be similar to 1410, 1420, 1430, and 1440 discussed above with respect to Figure 14. At 1650, objects may be added to the picture. In various embodiments, the added objects may be images from memory or computer-generated items placed within the image as if they were actually present in the real-world field of view. In some embodiments, the added objects may be used to enable gamification of everyday life. At 1660, the pictures are enhanced. Enhancement could include, but is not limited to: zooming in on a particular object or area within the field of view (and ensuring high resolution of the particular object or area); adjusting colors within the field of view; adding images from another camera included in the system; or other enhancements.
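
Two of the enhancements enumerated at 1660, zooming in on a region of interest and adjusting colors, might be sketched as follows. The function name, the nearest-neighbor upscaling, and the simple gain model are illustrative assumptions; the disclosure only enumerates the enhancement types.

    import numpy as np

    def enhance(frame: np.ndarray, roi=None, gain: float = 1.0) -> np.ndarray:
        """Apply a crop-and-zoom on a region of interest and a color gain.
        roi is (top, bottom, left, right) in pixel coordinates."""
        out = frame
        if roi is not None:
            t, b, l, r = roi
            out = out[t:b, l:r]  # zoom in on the object or area of interest
            out = np.repeat(np.repeat(out, 2, axis=0), 2, axis=1)  # naive 2x upscale
        return np.clip(out * gain, 0.0, 1.0)  # adjust colors within the view

    frame = np.random.default_rng(0).random((8, 8, 3))
    print(enhance(frame, roi=(2, 6, 2, 6), gain=1.2).shape)  # -> (8, 8, 3)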

[0073] Although both are included within the example augmentation flow 1600, 1650 and 1660 need not both be performed every time. In some embodiments, only the adding of objects at 1650 will occur. In other embodiments, only the enhancing of the images at 1660 will be performed. In various embodiments, both 1650 and 1660 will be performed. Setting the displays to the focus of the eyes at 1670 and displaying the pictures at 1680 may be similar to the setting action 1450 and the displaying action 1460 discussed with respect to Figure 14.

[0074] As used herein, the term component might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the technology disclosed herein. As used herein, a component might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a component. In implementation, the various components described herein might be implemented as discrete components or the functions and features described can be shared in part or in total among one or more components. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared components in various combinations and permutations. Even though various features or elements of functionality may be individually described or claimed as separate components, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.

[0075] Where components of the technology are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or processing component capable of carrying out the functionality described with respect thereto. One such example computing component is shown in Figure 17. Various embodiments are described in terms of this example computing component 1700. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the technology using other computing components or architectures.

[0076] Referring now to Figure 17, computing component 1700 may represent, for example, computing or processing capabilities found within desktop, laptop and notebook computers; hand-held computing devices (PDAs, smart phones, cell phones, palmtops, etc.); mainframes, supercomputers, workstations or servers; or any other type of special-purpose or general-purpose computing devices as may be desirable or appropriate for a given application or environment. Computing component 1700 might also represent computing capabilities embedded within or otherwise available to a given device. For example, a computing component might be found in other electronic devices such as, for example, digital cameras, navigation systems, cellular telephones, portable computing devices, modems, routers, WAPs, terminals and other electronic devices that might include some form of processing capability.

[0077] Computing component 1700 might include, for example, one or more processors, controllers, control modules, or other processing devices, such as a processor 1704. Processor 1704 might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller, or other control logic. In the illustrated example, processor 1704 is connected to a bus 1702, although any communication medium can be used to facilitate interaction with other components of computing component 1700 or to communicate externally.

[0078] Computing component 1700 might also include one or more memory components, simply referred to herein as main memory 1708, preferably random access memory (RAM) or other dynamic memory, which might be used for storing information and instructions to be executed by processor 1704. Main memory 1708 might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1704. Computing component 1700 might likewise include a read only memory ("ROM") or other static storage device coupled to bus 1702 for storing static information and instructions for processor 1704.

[0079] The computing component 1700 might also include one or more various forms of information storage mechanism 1710, which might include, for example, a media drive 1712 and a storage unit interface 1720. The media drive 1712 might include a drive or other mechanism to support fixed or removable storage media 1714. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive might be provided. Accordingly, storage media 1714 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive 1712. As these examples illustrate, the storage media 1714 can include a computer usable storage medium having stored therein computer software or data.

[0080] In alternative embodiments, information storage mechanism 1710 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing component 1700. Such instrumentalities might include, for example, a fixed or removable storage unit 1722 and an interface 1720. Examples of such storage units 1722 and interfaces 1720 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units 1722 and interfaces 1720 that allow software and data to be transferred from the storage unit 1722 to computing component 1700.

[0081] Computing component 1700 might also include a communications interface 1724. Communications interface 1724 might be used to allow software and data to be transferred between computing component 1700 and external devices. Examples of communications interface 1724 might include a modem or softmodem, a network interface (such as an Ethernet network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as, for example, a USB port, IR port, RS-232 port, Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface 1724 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 1724. These signals might be provided to communications interface 1724 via a channel 1728. This channel 1728 might carry signals and might be implemented using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.

[0082] In this document, the terms "computer program medium" and "computer usable medium" are used to generally refer to media such as, for example, memory 1708, storage unit 1722, media 1714, and channel 1728. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions, embodied on the medium, are generally referred to as "computer program code" or a "computer program product" (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing component 1700 to perform features or functions of the disclosed technology as discussed herein.

[0083] While various embodiments of the disclosed technology have been described above, it should be understood that they have been presented by way of example only, and not of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the disclosed technology, which is done to aid in understanding the features and functionality that can be included in the disclosed technology. The disclosed technology is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical or physical partitioning and configurations can be used to implement the desired features of the technology disclosed herein. Also, a multitude of different constituent module names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise.

[0084] Although the disclosed technology is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the disclosed technology, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the technology disclosed herein should not be limited by any of the above-described exemplary embodiments.

[0085] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term "including" should be read as meaning "including, without limitation" or the like; the term "example" is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms "a" or "an" should be read as meaning "at least one," "one or more" or the like; and adjectives such as "conventional," "traditional," "normal," "standard," "known" and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.

[0086] The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term "component" does not imply that the elements or functionality described or claimed as part of the component are all configured in a common package. Indeed, any or all of the various elements of a component, whether control logic or other elements, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.

[0087] Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.