

Title:
MODIFYING AUDIO BASED ON SITUATIONAL AWARENESS NEEDS
Document Type and Number:
WIPO Patent Application WO/2019/183225
Kind Code:
A1
Abstract:
Headphone systems and methods are provided that include a sensor to detect a condition of the environment. A detection circuit is configured to receive the sensor signal and to determine the condition of the environment, and an audio signal is modified in response.

Inventors:
DALEY, Michael J. (25 Morningside Drive, Shrewsbury, Massachusetts, 01545, US)
Application Number:
US2019/023171
Publication Date:
September 26, 2019
Filing Date:
March 20, 2019
Assignee:
BOSE CORPORATION (The Mountain, MS 3B1, Framingham, Massachusetts, 01701-9168, US)
International Classes:
H04R1/10; G10K11/178; H04S7/00; H04R5/033
Foreign References:
US20180014107A12018-01-11
US20080079571A12008-04-03
US20170301339A12017-10-19
Other References:
None
Attorney, Agent or Firm:
BRYAN, Timothy M. (The Mountain Road, MS 3B1, Framingham, Massachusetts, 01701-9168, US)
Claims:
CLAIMS

1. A headphone system, comprising:

a structural component configured to be worn by a user;

an acoustic driver coupled to the structural component to render an audio signal;

a microphone coupled to the structural component to detect an acoustic signal indicative of a condition of the environment and to provide a microphone signal; and

a detection circuit configured to receive the microphone signal and to detect the condition of the environment based at least upon the microphone signal, and to modify the audio signal in response to detecting the condition of the environment.

2. The headphone system of claim 1 wherein the detection circuit is configured to analyze the microphone signal to detect at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

3. The headphone system of claim 1 wherein the detection circuit is configured to modify the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting an audible cue.

4. The headphone system of claim 1 wherein the detection circuit is further configured to provide a visual cue to a user in response to detecting the condition of the environment.

5. A headphone system, comprising:

an earpiece;

an acoustic driver coupled to the earpiece to render an audio signal;

a transceiver coupled to the earpiece to emit a probe signal and to detect a reflected signal indicative of an aspect of the environment; and

a detection circuit configured to receive the reflected signal and to determine the aspect of the environment based at least upon the reflected signal, and to modify the audio signal in response to determining the aspect of the environment.

6. The headphone system of claim 5 wherein the transceiver is configured to generate an acoustic signal as the probe signal and to detect an acoustic echo as the reflected signal.

7. The headphone system of claim 5 wherein the transceiver is configured to generate a radio frequency signal as the probe signal and to detect a radio frequency echo as the reflected signal.

8. The headphone system of claim 5 wherein the detection circuit is configured to analyze the reflected signal to determine at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

9. The headphone system of claim 5 wherein the detection circuit is configured to modify the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting an audible cue.

10. The headphone system of claim 5 wherein the detection circuit is further configured to provide a visual cue to a user in response to determining the aspect of the environment.

11. A headphone system, comprising:

an acoustic driver to render an audio signal;

a sensor to detect a signal indicative of a condition of the environment and to provide a sensor signal; and

a detection circuit configured to receive the sensor signal and to determine the condition of the environment based at least upon the sensor signal, and to modify the audio signal in response to determining the condition of the environment.

12. The headphone system of claim 11 wherein the detection circuit is configured to analyze the sensor signal to detect at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

13. The headphone system of claim 11 wherein the detection circuit is configured to modify the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting an audible cue.

14. The headphone system of claim 11 wherein the detection circuit is further configured to provide a visual cue to the user in response to detecting the condition of the environment.

15. A method of managing situational awareness of a headphone user, the method comprising:

receiving an audio signal to be converted to an acoustic signal;

receiving a sensor signal indicative of a condition of the environment;

analyzing the sensor signal to detect the condition of the environment;

modifying the audio signal in response to the condition of the environment; and

rendering the acoustic signal, by a transducer, from the modified audio signal.

16. The method of claim 15 wherein analyzing the sensor signal to detect the condition of the environment includes detecting at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

17. The method of claim 15 wherein modifying the audio signal includes modifying the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting an audible cue.

18. The method of claim 15 further comprising providing a visual cue to the user in response to detecting the condition of the environment.

19. The method of claim 15 wherein the sensor signal is one of an acoustic signal, an optical signal, a microphone signal, or a radio signal.

20. The method of claim 15 further comprising transmitting a probe signal that is one of an acoustic signal, a radio signal, or an optical signal, and wherein the sensor signal is provided in response to the probe signal.

Description:
MODIFYING AUDIO BASED ON SITUATIONAL AWARENESS NEEDS

BACKGROUND

Headphone systems are used in numerous environments, for example during sporting activities such as bicycling or running, for purposes such as listening to audio content (e.g., music, talk), communications (e.g., telephone calls), and/or noise reduction. Use of headphones during certain activities may reduce the user's awareness of his/her surroundings. There is a need, therefore, to provide the desired benefits of headphone use while increasing the user's awareness of his/her surroundings, especially when dangers may be present, or when there may be conditions of the environment of which the user should maintain awareness.

SUMMARY OF THE INVENTION

Aspects and examples are directed to headphone systems and methods that detect aspects, events, conditions, or situations of the surrounding environment and may modify audio playback characteristics to increase the user's awareness of the environment and/or may provide informational content to a user, such as through an audio message or alert.

According to one aspect, a headphone system is provided that includes a structural component configured to be worn by a user, an acoustic driver coupled to the structural component to render an audio signal, a microphone coupled to the structural component to detect an acoustic signal indicative of a condition of the environment and to provide a microphone signal, and a detection circuit configured to receive the microphone signal and to detect the condition of the environment based at least upon the microphone signal, and to modify the audio signal in response to detecting the condition of the environment.

In some examples, the detection circuit is configured to analyze the microphone signal to detect at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

In certain examples, the detection circuit is configured to modify the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting audible cues. According to some examples, the detection circuit is further configured to provide a visual cue to a user in response to detecting the condition of the environment.

According to another aspect, a headphone system is provided that includes an earpiece, an acoustic driver coupled to the earpiece to render an audio signal, a transceiver coupled to the earpiece to emit a probe signal and to detect a reflected signal indicative of an aspect of the environment, and a detection circuit configured to receive the reflected signal and to determine the aspect of the environment based at least upon the reflected signal, and to modify the audio signal in response to determining the aspect of the environment.

In certain examples, the transceiver is configured to generate an acoustic signal as the probe signal and to detect an acoustic echo as the reflected signal.

In some examples, the transceiver is configured to generate a radio frequency signal as the probe signal and to detect a radio frequency echo as the reflected signal.

According to certain examples, the detection circuit is configured to analyze the reflected signal to determine at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

In some examples, the detection circuit is configured to modify the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting audible cues.

In certain examples, the detection circuit is further configured to provide a visual cue to a user in response to determining the aspect of the environment.

According to another aspect, a headphone system is provided that includes an acoustic driver to render an audio signal, a sensor to detect a signal indicative of a condition of the environment and to provide a sensor signal, and a detection circuit configured to receive the sensor signal and to determine the condition of the environment based at least upon the sensor signal, and to modify the audio signal in response to determining the condition of the environment.

In some examples, the detection circuit is configured to analyze the sensor signal to detect at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

In certain examples, the detection circuit is configured to modify the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting audible cues.

In some examples, the detection circuit is further configured to provide a visual cue to the user in response to detecting the condition of the environment.

According to another aspect, a method of managing situational awareness of a headphone user is provided that includes receiving an audio signal to be converted to an acoustic signal, receiving a sensor signal indicative of a condition of the environment, analyzing the sensor signal to detect the condition of the environment, modifying the audio signal in response to the condition of the environment, and rendering the acoustic signal, by a transducer, from the modified audio signal.

In some examples, analyzing the sensor signal to detect the condition of the environment includes detecting at least one of an approaching vehicle, a proximity of a person or animal, a high wind condition, a siren, or an alarm.

In certain examples, modifying the audio signal includes modifying the audio signal by at least one of reducing volume, compensating for wind noise, reducing acoustic noise reduction, disabling acoustic noise reduction, amplifying an external sound, adjusting an array parameter, or injecting an audible cue.

Some examples also include providing a visual cue to the user in response to detecting the condition of the environment.

In various examples, the sensor signal may be one of an acoustic signal, an optical signal, a microphone signal, or a radio signal.

Certain examples include transmitting a probe signal that is one of an acoustic signal, a radio signal, or an optical signal, and wherein the sensor signal may be provided in response to the probe signal.

Still other aspects, examples, and advantages of these exemplary aspects and examples are discussed in detail below. Examples disclosed herein may be combined with other examples in any manner consistent with at least one of the principles disclosed herein, and references to "an example," "some examples," "an alternate example," "various examples," "one example," or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described may be included in at least one example. The appearances of such terms herein are not necessarily all referring to the same example.

BRIEF DESCRIPTION OF THE DRAWINGS

Various aspects of at least one example are discussed below with reference to the accompanying figures, which are not intended to be drawn to scale. The figures are included to provide illustration and a further understanding of the various aspects and examples, and are incorporated in and constitute a part of this specification, but are not intended as a definition of the limits of the invention. In the figures, identical or nearly identical components illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every figure. In the figures:

FIG. 1 is a perspective view of an example headphone set;

FIG. 2 is a left-side view of an example headphone set;

FIG. 3 is a schematic diagram of an example headphone set;

FIG. 4 is a flow chart of an example method that may be carried out by a headphone set; and

FIG. 5 is a schematic diagram of a signal processing method that may be carried out by a headphone set.

DETAILED DESCRIPTION

Advanced headphone systems may immerse a user in an audio experience, and thereby reduce the user’s awareness of their surroundings. Accordingly, aspects and examples of systems and methods herein act to detect various conditions of the user’s environment and enhance a user’s awareness of his/her environment.

For example, systems and methods described herein may detect a potential hazard (e.g., approaching vehicles or people) or other environmental conditions of which a user should be aware (e.g., sirens, emergency services) while using headphones (e.g., while bicycling or running), and may reduce audio playback volume, reduce noise cancelling functionality, or provide an audible or visible prompt, in response to detecting the hazard or condition.

Throughout this disclosure the terms "headset," "headphone," and "headphone set" are used interchangeably, and no distinction is meant to be made by the use of one term over another unless the context clearly indicates otherwise. Additionally, aspects and examples in accord with those disclosed herein may be applied to earphone form factors (e.g., in-ear transducers, earbuds) and/or off-ear acoustic devices (e.g., devices that are designed not to contact a wearer's ears, but are worn in the vicinity of the wearer's ears, on the head or body, e.g., shoulders), and such are also contemplated by the terms "headset," "headphone," and "headphone set." Accordingly, any on-ear, in-ear, over-ear, or off-ear form factors of personal acoustic devices are intended to be included by the terms "headset," "headphone," and "headphone set." The term "earpiece" is intended to include any portion of such form factors in proximity to at least one of a user's ears.

FIG. 1 illustrates one example of a headphone set. The headphones 100 include two earpieces, e.g., a right earcup 102 and a left earcup 104, coupled to a right yoke assembly 108 and a left yoke assembly 110, respectively, and intercoupled by a headband 106. The right earcup 102 and left earcup 104 include a right circumaural cushion 112 and a left circumaural cushion 114, respectively. Visible on the left earcup 104 is a left interior surface 116. While the example headphones 100 are shown with earpieces having circumaural cushions to fit around or over the ear of a user, in other examples cushions may sit on the ear, or may include earbud portions that protrude into a portion of a user’s ear canal, or may include alternate physical arrangements, as discussed above. As discussed in more detail below, either of the earcups 102, 104 may include one or more sensors, such as microphones, accelerometers, radio receivers, etc. Although the example headphones 100 illustrated in FIG. 1 include two earpieces, some examples may include only a single earpiece for use on one side of the head only. Additionally, although the example headphones 100 include a headband 106, other examples may include different support structures to maintain one or more earpieces (e.g., earcups, in-ear structures, neckband, etc.) in proximity to a user’s ear, e.g., an earbud may include a shape and/or materials configured to hold the earbud within a portion of a user’s ear, or a personal speaker system may include a neckband to support and maintain acoustic driver(s) near the user’s ears, shoulders, etc.

FIG. 1 and FIG. 2 together illustrate multiple example placements of sensors, any one or more of which may be included in certain examples. FIG. 1 illustrates an interior microphone 120 in the interior of the left earcup 104. In some examples, an interior microphone may additionally or alternatively be included in the interior of the right earcup 102. FIG. 2 illustrates the headphones 100 from the left side and shows details of the left earcup 104, including a pair of front sensors 202, which may be nearer a front edge 204 of the earcup, and a rear sensor 206, which may be nearer a rear edge 208 of the earcup. The right earcup 102 may additionally or alternatively have a similar arrangement of front and rear sensors, though in some examples the two earcups may have a differing arrangement in number or placement of sensors. Additionally, various examples may have more or fewer sensors 202, 206 in various placements about a headphone, which may include sensors on the headband 106, a neckband, chin strap, etc., and in some examples sensors may be provided as an accessory sensor worn elsewhere on the user's body, such as on a shoe or a waistband, for instance. While not specifically illustrated in FIGS. 1 and 2, one or more acoustic drivers may be provided in each of the right and left earcups 102, 104 to provide audio playback to the user.

The sensors 202, 206 may be of various types, such as acoustic sensors (e.g., microphones, ultrasonic or sonar systems, etc.), electromagnetic or radio sensors (e.g., radio receivers, radar devices, etc.), and/or light sensors (infrared, visual, etc.). While the reference numerals 120, 202, and 206 are used to refer to one or more sensors, the visual element illustrated in the figures may, in some examples, represent a port where signals (acoustic, radio, light) may enter to ultimately reach a sensor, which may be internal and not physically visible from the exterior. In examples, a sensor may be immediately adjacent to the interior of such a port, or may be set back from the port by some distance, and may include a waveguide between the port and the associated sensor. In some examples, only one sensor may be necessary.

FIG. 3 is a schematic block diagram of an example headphone system 300, such as for the headphones 100. The headphone system 300 includes a controller 310 that provides signals to acoustic drivers 320 for audio playback. The controller 310 includes a processor 312, an audio interface 314, and may include a battery 316 and/or additional components. The audio interface 314, for example, may be a wired input or may be a wireless interface configured, at least in part, to receive program content signals for audio playback. The controller 310 may also receive signals from various sensors, including the sensors 202, 206 (see, e.g., FIGS. 1-2) and the microphones 120, for example. In some examples, one or more of the sensors 202, 206 may be acoustic sensors, such as microphones, and may provide sensor signals for detection of environmental conditions. In some examples, one or more of the sensors 202, 206 may be microphones that provide a feedforward signal of an active noise control (ANC) or acoustic noise reduction (ANR) system, which may be implemented by the controller 310, for example. Similarly, in some examples, one or more of the microphones 120 may be feedback microphones of an ANC/ANR system. In various examples, one or more of the sensors 202, 206 may be a radio or other electromagnetic sensor (e.g., antenna). In some examples, one or more of the sensors 202, 206 may be light sensitive.

In various examples, the sensors 202, 206 may be of various types and may provide one or more signals to be processed in various ways by the controller 310 to detect various environmental conditions, and in some examples the controller 310 may further characterize a condition as to a level of danger or threat, and take an action to intervene such that the user may have increased awareness to the condition.

In some examples, one or more of the sensors 202, 206 may be microphones whose signals are processed by the controller 310 to detect the presence of various acoustic characteristics or signatures, such as sirens, vehicle noises (such as tires, engines, horns, hum or vibration of electric motors, etc.), footsteps, etc. that may indicate the approach or presence of a vehicle or a person. In some examples, one or more of the sensors 202, 206 may be an electromagnetic or radio receiver and may receive (e.g., monitor for) electromagnetic signals from vehicles or people, such as signals transmitted for the purpose of proximity detection, blind-spot or curb detection transponders, electromagnetic engine noise, radio transmitters (e.g., commercial two-way radio, citizens band (CB) radio, emergency services communications, etc.), cellular signals, vehicle-to-vehicle (V2V) communications systems, and the like. In some examples, one or more of the sensors 202, 206 may be light sensitive or light detecting sensors and may receive (or detect) light output from vehicles or people, such as approaching headlights, headlamps, emergency vehicle strobes, and the like.
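As a rough illustration of the acoustic-signature analysis described above, the sketch below flags microphone frames whose spectral energy concentrates in a nominal siren band. The patent discloses no particular algorithm; the band limits, threshold ratio, and function names here are illustrative assumptions only.

```python
import numpy as np

def band_energy(frame, fs, f_lo, f_hi):
    # Energy of the frame within [f_lo, f_hi) Hz, from the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return float(np.sum(spectrum[mask] ** 2))

def detect_siren(frame, fs=16000, threshold_ratio=5.0):
    # Flag a frame whose energy in a nominal siren band (500-1800 Hz,
    # an illustrative choice) dominates the rest of the spectrum.
    siren = band_energy(frame, fs, 500.0, 1800.0)
    total = band_energy(frame, fs, 0.0, fs / 2.0)
    other = total - siren
    return siren > threshold_ratio * max(other, 1e-12)
```

A detector of this kind would run per frame on the microphone signal; richer signatures (tire noise, footsteps, engine hum) would call for more elaborate features or learned classifiers.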

In various examples, one or more of the sensors 202, 206 may be located anywhere on a personal acoustic device of various form factors, or may be provided as an accessory sensor worn elsewhere on the user's body, such as a helmet mount or waistband, for instance. In some examples, sensors may be positioned for sensitivity and directionality for detection of the relevant environment. For example, sensors positioned near the rear of the user (e.g., back of earcup, headband, neckband, helmet, etc.) may provide enhanced response for people and vehicles approaching from behind the user. In the case of a microphone, rear positioning may enhance response to acoustic signals coming from behind the user and also may provide shielding from wind noise. For example, some user activities (e.g., bicycling, motorcycling) may propel the user rapidly through the air and otherwise create significant wind noise. A microphone positioned near the rear, or otherwise behind the head or body of the user, may have increased response and/or acoustic sensitivity due to the relative wind shielding provided, e.g., by the user's head. In some examples, a wind baffle or wind blocking component is provided to shield one or more of the sensors 202, 206 from wind.

In various examples, the controller 310 may analyze one or more microphone signals to detect environmental conditions, and in some examples may further characterize the condition as to a level of danger. For example, a low rumbling engine may indicate a larger vehicle, such as a truck, which may be a more significant danger than a small vehicle, which may have higher-pitched engine acoustics. Additionally, characteristics of the acoustic signals generated by a vehicle may indicate not only a relative size of the vehicle but also a relative speed of the vehicle. Accordingly, a rapidly approaching truck may be distinguished from a relatively lightweight moped approaching at lower speed, for example, and the rapidly approaching truck may present more of a danger to a bicyclist.
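One simple proxy for the low-rumble-versus-high-pitch distinction above is the spectral centroid of a microphone frame. The sketch below is an assumption-laden illustration (the cutoff frequency and labels are invented for this example, not taken from the disclosure):

```python
import numpy as np

def spectral_centroid(frame, fs):
    # Amplitude-weighted mean frequency of the frame's spectrum.
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def classify_vehicle(frame, fs=16000, rumble_cutoff_hz=300.0):
    # A low centroid suggests a rumbling (larger) engine; a higher
    # centroid suggests a smaller, higher-pitched vehicle.
    return "large" if spectral_centroid(frame, fs) < rumble_cutoff_hz else "small"
```

Estimating relative speed, as the text also contemplates, would additionally require tracking how such features (e.g., level or Doppler shift) change over successive frames.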

A bicyclist (or a runner, jogger, hiker, etc.) wearing headphones may not hear an approaching vehicle, e.g., when listening to an audio program and/or using noise reduction features of a personal acoustic system. Listening to music at high volume and/or using particularly aggressive (and effective) noise reduction features may reduce the user's situational awareness even further, significantly increasing risk of not noticing a developing danger, such as an approaching vehicle. Accordingly, in response to detecting a condition that may be a danger (or a condition to which the user should otherwise have increased awareness), the controller 310 may implement an intervention tending to increase the user's awareness of his/her surroundings. For example, in response to an acoustic detection of what may be a rapidly approaching truck, the controller 310 may act to intervene by any of various responses, which may include reducing audio playback volume, switching noise control features to less aggressive settings, disabling noise control features, providing an audible notification (e.g., a beep, tone, spoken message, etc.) through one or more of the acoustic drivers 320, providing a visual notification (e.g., flashing a light source within the peripheral vision of the user), and/or amplifying certain external sounds (such as within a particular spectrum or within the defining bounds of the acoustic signature, e.g., the sound of the vehicle engine). In some examples, the controller 310 may provide an intervening action intended for reception at the detected condition or perceived source of danger, such as flashing a light intended to be seen by an approaching vehicle, transmitting a radio or sonar signal intended to be received by specialized receivers on a vehicle (e.g., vehicle proximity detection systems), or the like.
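The escalating interventions described above can be sketched as a simple policy over playback state. Everything here (the state fields, danger levels, and specific thresholds) is a hypothetical arrangement for illustration; the disclosure does not prescribe any particular mapping:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioState:
    volume: float = 1.0         # playback gain, 0..1
    anr_level: float = 1.0      # noise-reduction aggressiveness, 0..1
    cue: Optional[str] = None   # audible notification to inject, if any

def intervene(state: AudioState, danger_level: int) -> AudioState:
    # Escalating responses: lower playback volume first, then disable
    # noise reduction, then inject an audible cue at the highest level.
    if danger_level >= 1:
        state.volume = min(state.volume, 0.5)
    if danger_level >= 2:
        state.anr_level = 0.0
    if danger_level >= 3:
        state.cue = "vehicle approaching"
    return state
```

Visual notifications or outward-directed signals (flashing a light, transmitting to a vehicle's proximity receivers) would be additional actions triggered from the same policy.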

FIG. 4 illustrates an example method 400 that may be implemented by various headphone systems, such as the headphone system 300. A controller or processing system, such as the controller 310, receives from one or more sensors signals that are indicative of the environment (block 410), and analyzes the signals (block 420) to detect an environmental condition (block 430). Upon detection of an environmental condition (which may optionally be characterized into various levels of significance, in some examples), the controller or processing system may provide intervention (block 440) in the form of one or more actions, as previously described.
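The flow of method 400 can be expressed as a per-frame loop. The function below is a structural sketch only; the callables it accepts stand in for whatever sensor, analysis, and intervention implementations a given system uses:

```python
def awareness_loop(read_sensor, analyze, intervene, frames):
    # Run the method of FIG. 4 over a number of sensor frames:
    # receive (block 410), analyze (block 420), detect (block 430),
    # and intervene on any detected condition (block 440).
    interventions = []
    for _ in range(frames):
        frame = read_sensor()                        # block 410
        condition = analyze(frame)                   # blocks 420-430
        if condition is not None:                    # condition detected
            interventions.append(intervene(condition))   # block 440
    return interventions
```

In a real headphone system this loop would run continuously on the controller, with `analyze` optionally characterizing each detection into a level of significance before intervening.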

In some examples, a controller such as the controller 310 may process signals from various sensors (e.g., sensors 202, 206) in a manner to provide enhanced and/or reduced response in certain directions, e.g., via array processing techniques. For example, and as illustrated in FIG. 5, an example processing method 500 is shown. The sensors 202, 206 may provide two or more individual signals 502 to be combined with array processing, e.g., by the controller 310, to implement a beam former 510 to produce a signal 512 having enhanced response in a particular beam direction. Additionally or alternatively, two or more of the individual signals 502 may be combined with array processing, e.g., by the controller 310, to implement null steering 520 to produce a signal 522 having reduced response in a particular null direction. The signal 512 may be produced with an enhanced response in a selected direction, such as to the rear of the user, for example, to enhance the likelihood of detection of an environmental condition to the rear. In various examples, any desired direction may be implemented by the beam former 510 for enhanced response within the signal 512. In some examples, the signal 522 may be a reference signal. For example, to confirm that an acoustic sound is coming from a certain direction (a direction enhanced by the beam former 510), the signal 522 may be formed by null steering 520 to have reduced response in the same certain direction. Accordingly, if a signal such as a rumbling truck engine is present in the signal 512 but absent from the signal 522, such may provide higher confidence that there is a rumbling truck engine in the certain direction. In various examples, array processing to provide beam forming and null steering may be applied to various forms of sensors.
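For the two-sensor acoustic case, the beam former 510 and null steering 520 can be illustrated with elementary delay-and-sum processing. This is a minimal sketch assuming integer-sample steering delays and circular alignment via `np.roll`; practical implementations use fractional delays and filter-based array processing:

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    # Beam former 510: align each sensor signal by its steering delay
    # and average, enhancing sources arriving from the steered direction.
    out = np.zeros_like(signals[0], dtype=float)
    for sig, d in zip(signals, delays_samples):
        out += np.roll(sig, -d)
    return out / len(signals)

def null_steer(signals, delays_samples):
    # Null steering 520 (two sensors): align on the null direction and
    # subtract, cancelling sources arriving from that direction.
    a = np.roll(signals[0], -delays_samples[0])
    b = np.roll(signals[1], -delays_samples[1])
    return a - b
```

A source in the steered direction survives `delay_and_sum` at full strength but vanishes from `null_steer`, which is the basis of the confidence check described above (present in signal 512, absent from signal 522).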

In various examples, any of the functions or methods, and any components of systems (e.g., the controller 310), described herein may be implemented or carried out in a digital signal processor (DSP), a microprocessor, a logic controller, logic circuits, and the like, or any combination of these, and may include analog and/or digital circuit components and/or other components with respect to any particular implementation. Functions and components disclosed herein may operate in the digital domain, and certain examples include analog-to-digital (ADC) conversion of analog signals provided, e.g., by microphones, despite the lack of illustration of ADCs in the various figures. Any suitable hardware and/or software, including firmware and the like, may be configured to carry out or implement components of the aspects and examples disclosed herein, and various implementations of aspects and examples may include components and/or functionality in addition to those disclosed.

Examples disclosed herein may be combined with other examples in any manner consistent with at least one of the principles disclosed herein, and references to "an example," "some examples," "an alternate example," "various examples," "one example," or the like are not necessarily mutually exclusive and are intended to indicate that a particular feature, structure, or characteristic described may be included in at least one example. The appearances of such terms herein are not necessarily all referring to the same example.

It is to be appreciated that examples of the methods and apparatuses discussed herein are not limited in application to the details of construction and the arrangement of components set forth in the following description or illustrated in the accompanying drawings. The methods and apparatuses are capable of implementation in other examples and of being practiced or of being carried out in various ways. Examples of specific implementations are provided herein for illustrative purposes only and are not intended to be limiting. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use herein of “including,”“comprising,”“having,” “containing,”“involving,” and variations thereof is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. References to“or” may be construed as inclusive so that any terms described using“or” may indicate any of a single, more than one, and all of the described terms. Any references to front and back, left and right, top and bottom, upper and lower, and vertical and horizontal are intended for convenience of description, not to limit the present systems and methods or their components to any one positional or spatial orientation.

Having described above several aspects of at least one example, it is to be appreciated various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure and are intended to be within the scope of the invention. Accordingly, the foregoing description and drawings are by way of example only, and the scope of the invention should be determined from proper construction of the appended claims, and their equivalents.