

Title:
EMBODIED SOUND DEVICE AND METHOD
Document Type and Number:
WIPO Patent Application WO/2021/142136
Kind Code:
A1
Abstract:
A tactile audio system and associated methods are disclosed. In one example, the tactile audio system includes a signal processor configured to separate an input signal into a transient group and a sustained group. In one example, the tactile audio system includes a signal processor configured to separate an input signal into a plurality of frequency bands for each of the transient group and the sustained group. A number of transducers are provided to generate a tactile response corresponding to portions of the separated audio signal.

Inventors:
CHAGAS PAULO C (US)
CASTRO ETHAN (US)
Application Number:
PCT/US2021/012522
Publication Date:
July 15, 2021
Filing Date:
January 07, 2021
Assignee:
UNIV CALIFORNIA (US)
CHAGAS PAULO C (US)
CASTRO ETHAN (US)
International Classes:
H04R25/00; G06F3/01; H04R3/12
Domestic Patent References:
WO2018177608A12018-10-04
Foreign References:
US20170325039A12017-11-09
US20070183605A12007-08-09
US20100232623A12010-09-16
US20070121955A12007-05-31
US20120313765A12012-12-13
Attorney, Agent or Firm:
SCHEER, Bradley W. et al. (US)
Claims:
Claims

1. A tactile audio system, comprising: an audio processing device configured to separate an audio input into: a transient group and a sustained group; a plurality of frequency bands for each of the transient group and the sustained group; a first number of amplifiers corresponding to one or more of the frequency bands; and a second number of transducers coupled to the first number of amplifiers.

2. The tactile audio system of claim 1, wherein two transducers are coupled to one amplifier.

3. The tactile audio system of claim 1, wherein the plurality of frequency bands includes four frequency bands.

4. The tactile audio system of claim 3, wherein the four frequency bands include one high frequency band, two mid-range bands, and one low frequency band.

5. The tactile audio system of claim 1, wherein communication of one or more of the plurality of frequency bands between the audio processing device and a transducer is configured to be wireless.

6. The tactile audio system of claim 1, further including one or more feedback sensors to calibrate a frequency response of a subsequent object attached to one or more of the second number of transducers.

7. The tactile audio system of claim 6, wherein the audio processing device is configured to compare the frequency response of the subsequent object to a target frequency response and calculate a calibration filter.

8. The tactile audio system of claim 7, wherein the calibration filter is an inverse filter.

9. The tactile audio system of claim 1, wherein the second number of transducers are coupled to a backpack-like form factor.

10. The tactile audio system of claim 1, wherein the second number of transducers are coupled to a floor panel.

11. The tactile audio system of claim 1, wherein the second number of transducers are coupled to a wall panel.

12. The tactile audio system of claim 1, further including one or more audio speakers to augment the tactile response from the second number of transducers.

Description:
EMBODIED SOUND DEVICE AND METHOD

Claim of Priority

[0001] This application claims the benefit of priority to U.S. Provisional

Patent Application Serial No. 62/958,189, filed on January 7, 2020, and to U.S. Provisional Patent Application Serial No. 62/958,218, filed on January 7, 2020, each of which is incorporated by reference herein in its entirety.

Technical Field

[0002] Embodiments described herein generally relate to audio systems, components, and methods.

Background

[0003] Over 1 in 10 persons have hearing loss. Only surpassed by arthritis and heart disease, hearing loss is the third most prevalent health issue in older adults. It may vary from mild to profound, but every age group experiences a fair amount of hearing loss. The cause behind losing one’s hearing before an advanced age can range from exposure to loud noises - the concerts from youth, a loud movie, a car crash - to simply being born without the ability to hear due to congenital effects (CDC). Hearing loss impacts nearly every element of the human experience: physical health, emotional and mental health, social skills, self-esteem, and more. It is desired to have methods and devices that provide an audio experience for those with hearing loss, as well as enhance an audio experience for others with minimal or no hearing loss.

Brief Description of the Drawings

[0004] FIG. 1 shows a component of a tactile audio system in accordance with some example embodiments.

[0005] FIG. 2 shows a diagram of a tactile audio system in accordance with some example embodiments.

[0006] FIG. 3 shows components of a tactile audio system in accordance with some example embodiments.

[0007] FIG. 4 shows components of a tactile audio system in accordance with some example embodiments.

[0008] FIG. 5 shows components of a tactile audio system in accordance with some example embodiments.

[0009] FIG. 6 shows a partially enclosed space and a tactile audio system in accordance with some example embodiments.

[0010] FIG. 7 shows a diagram of a tactile audio system in accordance with some example embodiments.

[0011] FIG. 8 shows a diagram of a tactile audio system in accordance with some example embodiments.

[0012] FIG. 9 shows a diagram of a tactile audio system in accordance with some example embodiments.

[0013] FIG. 10A shows components of a tactile audio system in accordance with some example embodiments.

[0014] FIG. 10B shows additional components of the tactile audio system from Figure 10A in accordance with some example embodiments.

[0015] FIG. 11 shows an example form factor of a tactile audio system in accordance with some example embodiments.

Description of Embodiments

[0016] The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims.

[0017] Example systems and methods in the present disclosure help those with hearing impairment to experience music in high fidelity, and can help those without impairment to get closer to music without further damaging their ears. This idea was made possible by advances in bone conduction, signal processing, sonic excitation, and tactile transducer technologies.

[0018] Existing solutions have a number of drawbacks associated with them that are overcome by example systems and methods of the present disclosure.

[0019] High-Fidelity (Hi-Fi) hearing aids are nothing new; they allow for wider frequency ranges and much larger dynamic ranges (speech is somewhere between 35-55 decibels (dB) in dynamic range, as opposed to a rock concert that can be up to 120 dB). These types are more suitable for music than older hearing aid models, but use existing air-driver technology, which can further damage eardrums if turned up to a preferred level.

[0020] Aftershokz - a headphone company whose proprietary patented technology enables bone-conduction sound via small transducers placed on the cheekbones. Bone conduction devices allow sound perception to happen without blocking the ear canal, allowing outside-world sounds to enter the ear uninterrupted. However, this solution does not provide good response at lower frequencies.

[0021] Cochlear - a company focused on implantable hearing devices including the Cochlear implant, Baha bone conduction sound processors, and Carina middle ear implants. This solution is expensive and invasive in comparison to examples of the present disclosure.

[0022] Currently, the only way for someone with hearing impairment to experience music is to turn up the volume loud enough to feel the speakers shaking. This causes further hearing damage and can disturb many others, while not actualizing the visceral feeling the listener is looking for.

[0023] What is disclosed is a platform based on tactile reproduction of sound - sound you can feel, as well as hear. We discovered that the tactile panel setup actually performed quite well in reproducing most audible frequencies, and with slight adjustments of either material and/or shape, we could adjust the resonant sonic characteristics of the device to taste. Example systems and methods disclosed allow users to interact with their favorite songs in a whole new way - via touch.

[0024] Tactile audio can provide a different embodied experience:

[0025] 1) The user can have an embodied experience, physically interacting with sound.

[0026] 2) Resonant materials can be touched and felt. This provides a myriad of uses, from entertainment to therapeutic.

[0027] 3) Resonant materials provide more dynamic range with less compression. This is good for dynamic instruments such as drums.

[0028] 4) Large surfaces can be used in interactive entertainment environments such as theme parks, movie theaters, or hazardous environments (e.g. water parks).

[0029] 5) Can be installed in walls to have an ‘invisible speaker’ effect.

[0030] 6) Can introduce a sonic experience in unconventional places, such as a tilted drafting desk.

[0031] 7) Heightened immersivity in 4D+ entertainment experiences - the body can experience the stimulus that is presented.

[0032] In one example, a sound signal (music, microphone, etc.) is amplified via an audio amplifier and sent to tactile transducers. In one example, the transducers are voice coils that are not attached to a cone to drive air. Instead, the transducers have a small mass that is attached (e.g. via adhesive or other mounting method) to a resonant material that is ideally stiff, lightweight, and mildly flexible at the extremities. The mass oscillates, which transfers energy to the resonant material, which naturally amplifies certain frequencies due to the resonant nature of the material and the shape the device takes, and transmits the resulting vibrations to the air, to be heard, or to the body, to be felt via touch.

[0033] This concept is scalable from a small personal resonant box to a large platform that can support multiple people. We have developed a relatively compact device design that will contain all necessary components for full-frequency tactile reproduction and immersive entertainment consumption. Figure 1 shows an example system 100 for use in a tactile audio system. In the example shown, system 100 includes four exciters 102, an audio processing device 104, frequency splitter 106, and amplifier 108.

[0034] Figure 2 shows a diagram of one example tactile audio system 200 using one or more converters such as system 100 from Figure 1. The system 200 includes an example audio processing device that includes a digital signal processor (DSP) 202 that is coupled to input 201. In the example shown, the DSP 202 divides the audio into a number of frequency bands. In the example of Figure 2, four frequency bands are shown. A first amplifier 204 is assigned to the first frequency band and passes on an amplified signal to first transducers 206. A second amplifier 210 is assigned to the second frequency band and passes on an amplified signal to second transducers 212. A third amplifier 220 is assigned to the third frequency band and passes on an amplified signal to third transducers 222. In one example, one or more frequency bands may be transmitted wirelessly. Figure 2 shows the fourth amplifier 234 receiving input from a transmitter 230 that communicates with a fourth receiver 232 and in turn with fourth amplifier 234.

[0035] In one example involving a rear-mounted system, the Computational Audio Processing 202 applies to the source audio 201 as follows:

[0036] Computational instructions will check if the source audio received is 2.0 stereo sound or a surround format to determine how to route the audio channels. If it is 2.0 stereo sound, it will cross over the frequencies directly in the filtering stage, reproducing the full frequency signal. If it is a surround sound format, the computational instructions will separate and process the left and right rear surround channels from a 5.1 format, or a combination of the surround channels from a 7.1 format, plus the unprocessed Low Frequency Effects (LFE) bass channel if there is any (the LFE channel will not be processed through the filtering stage unless forced into Stereo 2.0 mode). For example, if the source audio was originally in 5.1 surround sound format, the computational instructions will only deal with the rear left and right speaker audio plus the LFE channel, and pass the remaining channels to an external audio system that would reproduce the non-surround channels normally.
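The routing decision described above can be sketched in Python. This is purely an illustrative sketch: the channel labels (`Ls`, `Rs`, `Lrs`, `Rrs`), the function name, and the return shape are assumptions for demonstration, not details from the disclosure.

```python
def route_channels(layout, channels):
    """Decide which input channels feed the tactile filtering stage.

    layout   -- "2.0", "5.1", or "7.1" (assumed labels)
    channels -- dict mapping a channel name to its signal

    Returns (tactile_channels, lfe, passthrough_channels).
    Stereo 2.0 goes straight to the filtering stage; surround formats
    contribute only their rear/surround channels plus an unprocessed
    LFE channel, and the rest is passed to an external system.
    """
    if layout == "2.0":
        # Full-frequency signal is crossed over directly in the filtering stage.
        return dict(channels), None, {}
    surround_keys = {"5.1": ["Ls", "Rs"],
                     "7.1": ["Ls", "Rs", "Lrs", "Rrs"]}[layout]
    tactile = {k: channels[k] for k in surround_keys if k in channels}
    lfe = channels.get("LFE")  # left unprocessed by the filtering stage
    passthrough = {k: v for k, v in channels.items()
                   if k not in tactile and k != "LFE"}
    return tactile, lfe, passthrough
```

For a 5.1 input, only `Ls`, `Rs`, and `LFE` reach the tactile path; the front channels pass through to the external audio system, matching the 5.1 example in the text.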

[0037] For each speaker channel (Left and Right), the audio will undergo digital signal processing, such as transient detection and extraction. "Transient" sounds are short bursts of audio with a relatively high amplitude in the onset. Examples of transient audio are drum beats and gunshots. The computational instructions will determine if there are transient sounds in the audio based on the above description, and will trigger the separation process on the detected transient sounds. Transient sounds will be separated from the "sustained" sounds in the audio source. "Sustained" sounds are defined as sounds that continue or are prolonged for an extended period or without interruption. Examples of sustained sounds are ambient noise or any instrument that requires air to initiate sound (brass or a pipe organ). The computational instructions will determine if there are sustained sounds in the audio based on the above description, and will preserve the detected sustained sounds from the transient extraction process. At the end of transient detection and extraction, sounds will be assigned into "Transient" and "Sustained" groups for each speaker channel. Each assigned group will be able to adjust the individual volumes of the different frequency bandwidth ranges in the next step.
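One way to realize the transient/sustained split described above is a simple frame-energy onset heuristic. The disclosure does not fix a particular algorithm, so the frame size, threshold ratio, and function shown here are illustrative assumptions, not the claimed method.

```python
import numpy as np

def split_transient_sustained(x, frame=256, rise_ratio=2.0):
    """Split a mono signal into transient and sustained parts.

    A frame is flagged as transient when its RMS jumps by more than
    rise_ratio over the previous frame (a simple onset heuristic).
    Returns (transient, sustained) with transient + sustained == x
    over the analyzed length.
    """
    n = len(x) // frame
    x = x[:n * frame]
    frames = x.reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
    mask = np.zeros(n, dtype=bool)
    mask[1:] = rms[1:] > rise_ratio * rms[:-1]   # sharp energy rise = onset
    gate = np.repeat(mask, frame).astype(float)
    return x * gate, x * (1.0 - gate)
```

A steady sine (a "sustained" sound) stays in the sustained output, while a sudden burst (a "transient" sound, like a drum hit) is routed to the transient output.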

[0038] The "Transient" sound group and the "Sustained" sound group for each speaker will each be divided into four frequency response categories: High, High-Mid, Low-Mid, and Low/Sub. "High" frequencies are frequencies around 5000 Hz and beyond. "High-Mid" frequencies are frequencies around 800 - 5000 Hz. "Low-Mid" frequencies are frequencies around 250 - 800 Hz. "Low/Sub" frequencies are frequencies around 5 - 250 Hz.
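The four-band division might be sketched with standard band-pass filters. The band edges follow the approximate numbers given above; the filter order, the Butterworth topology, and the use of SciPy are illustrative choices, not part of the disclosure.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr=44100):
    """Split a signal into the four bands named in the text.

    Band edges follow the disclosure's approximate values; the
    4th-order Butterworth filters are an illustrative choice.
    Returns a dict of band name -> filtered signal.
    """
    edges = {"low_sub":  (5, 250),
             "low_mid":  (250, 800),
             "high_mid": (800, 5000)}
    bands = {}
    for name, (lo, hi) in edges.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        bands[name] = sosfilt(sos, x)
    # "High" is open-ended (5000 Hz and beyond), so use a highpass.
    sos_hi = butter(4, 5000, btype="highpass", fs=sr, output="sos")
    bands["high"] = sosfilt(sos_hi, x)
    return bands
```

Feeding a 100 Hz tone through this split concentrates the energy in the Low/Sub band, as expected from the stated band edges.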

[0039] The volume for each frequency response category from both groups is adjusted by a DSP (digital signal processor). This can be adjusted manually post-install. For example, if a user were to enhance the vibratory experience from the explosions in their video game, they could increase the volume in the "Low/Sub" category in the "Transient" group. On the other hand, if the same user wanted to drown out the droning of rainfall in the game, they could turn down the "High-Mid" volume in the "Sustained" group. After the volume adjustment of each frequency range is applied, the output of the DSP will be sent to the respective tactile drivers for each category. In an example system, there are two groups of drivers: one for Transient sound and one for Sustained sound. For each group, there is a driver for each of the four frequency range categories (High, High-Mid, Low-Mid, Low/Sub). 4 drivers per group x 2 groups per speaker x 2 speakers = 16 drivers total. For example, the adjusted volume output of the "Low/Sub" category of the "Transient" group of the Left speaker will be heard from the "Low/Sub" driver of the "Transient" group of the Left speaker.
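The driver arithmetic above (4 drivers per group x 2 groups per speaker x 2 speakers = 16 drivers) and the per-category volume adjustment can be sketched as a small routing table. The names and gain convention here are hypothetical, chosen only to mirror the explosion/rainfall example in the text.

```python
from itertools import product

SPEAKERS = ("Left", "Right")
GROUPS = ("Transient", "Sustained")
BANDS = ("High", "High-Mid", "Low-Mid", "Low/Sub")

def make_driver_map(gains):
    """Build the driver routing described above: one tactile driver
    per (speaker, group, band), each with its own volume gain.

    gains -- dict mapping (group, band) -> linear gain; unlisted
             categories default to unity (1.0).
    """
    return {(spk, grp, band): gains.get((grp, band), 1.0)
            for spk, grp, band in product(SPEAKERS, GROUPS, BANDS)}
```

Boosting Transient Low/Sub (explosions) and cutting Sustained High-Mid (rainfall) then touches exactly those drivers on both speakers, leaving the other 12 of the 16 drivers at unity.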

[0040] In one example, computational instructions in one or more of the systems disclosed further use one or more sensors to provide feedback and facilitate adjustment of resonance in an object or panel that transducers or exciters are coupled to. Sensor examples include, but are not limited to, piezo sensors, MEMS sensors, traditional microphones, vibration sensors, etc. In one example, an input signal is passed through unprocessed at an initial operation. When a calibration cycle begins, an impulse response is played through one or more transducers and captured using the sensors. The initial frequency response data is plotted against a target response. One example of a target response includes a flat response across the total frequency capabilities of the hardware being implemented - useful for critical audio situations. In one example, filters are then calculated between the initial frequency response captured and the target frequency response. In one example, the filter is an inverse filter. Other filter modes are also within the scope of the invention. In one example, inverse filters are applied via software at line level to an output signal before amplification to the tactile transducers.
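A magnitude-only version of the inverse-filter calculation described above might look like the following. The function name, the regularization floor, and the per-bin gain formulation are assumptions for illustration; a deployed system would typically also address phase.

```python
import numpy as np

def inverse_calibration_filter(measured, target, floor=1e-3):
    """Magnitude-only inverse calibration filter.

    measured -- frequency response captured from the feedback sensors
    target   -- desired response (e.g. all ones for a flat target)
    floor    -- regularization so deep nulls are not boosted wildly

    Returns per-bin gains such that gains * measured ~= target.
    """
    measured = np.asarray(measured, dtype=float)
    target = np.asarray(target, dtype=float)
    return target / np.maximum(np.abs(measured), floor)
```

Applying the returned gains to the measured response recovers the target (flat) response, which is the stated goal of the calibration cycle; the floor keeps the gains finite even where the panel has a null.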

[0041] While we are using a sensory experience that is fundamentally different than traditional sound equipment (touch versus hearing), ultimately it will be entering a similar field and can be compared to traditional air drivers, although a caveat remains: consider the range of ability this technology has.

[0042] 1) Our sound system designs are more efficient at producing a balanced reproduction of a large frequency range in near-field or direct-touch applications at the same power rating.

[0043] 2) Compared to near-field monitors of the same power rating, our distributed mode loudspeaker (DML) design has a larger area of off-axis frequency accuracy, which is desirable for untreated listening rooms and allows for great acoustic performance without precise placement.

[0044] 3) The bass module can provide lower frequencies at more powerful levels, while being more efficient at a fraction of the cost.

[0045] 4) Fewer negative side effects of listening to music for extended periods of time. Wider dispersion of sound means less hearing damage.

[0046] 5) Unlocks the ability to perceive frequencies previously lost due to hearing damage/auditory issues.

[0047] Two example modes that may be used alone or combined include: 1) an interactive sound room, where listeners can touch interactive materials, and 2) an immersive mobile system, where the listener is wearing materials which retain the ability to transmit micro-vibrations to the body, resulting in intelligible sound. Both systems may be converted into a hybrid system, incorporating traditional speaker components, in order to augment the immersion experience of the environment as a whole.

Sound Room:

[0048] In one example, a high-fidelity sound room uses several high-mid range tactile transducers, called exciters, with multiple low-end tactile transducer elements, called bass shakers, to achieve a high-fidelity tactile experience. When these transducers are affixed to a structure, the structure will provide a visceral embodied experience in which users can touch and interact with sound vibrations in speech and music. In one example, this system processes the signal with a computational audio processing unit to give a more realistic sound impression across the audible frequency range.

Hybrid Sound Room:

[0049] This tactile system can be augmented with a traditional speaker array in order to balance out difficult frequencies, and to demonstrate pure tactile experience against traditional speaker (plus subwoofer) setup.

Mobile:

[0050] In one example, a mobile system is provided that can be worn as the user moves about. The mobile system consists of wearable high-frequency tactile transducers, such as a bone-conduction headset worn just over the front of the ears, or exciters pressed against a user's body, in addition to a wearable low-frequency tactile transducer worn on the chest/back. Like the sound room, this mobile rig is processed via computational audio processing in order to achieve a tactile, tonal balance with an extended bass response. In one example, the mobile system is combined with augmented reality (AR) to provide a high-quality, immersive tactile experience.

Hybrid Mobile Rig:

[0051] The tactile system can be augmented with a pair of traditional in-ear drivers (4 channels total) to provide a fully-immersive reality, suitable for true mobile multichannel virtual reality (VR).

Multichannel/Ambisonics Integration:

[0052] In one example, a system can reproduce multichannel media (multiple discrete channels of audio) with four discrete channels of audio - for example, a movie with surround sound, or multichannel music prepared for multichannel surround sound systems.

[0053] Figure 3 shows a panel 300 that may be equipped with a system as described in the present disclosure. In one example, the panel 300 may be integrated into a room environment, for example, a floor panel, ceiling panel, wall panel, free standing partition panel, etc. A 4’ x 8’ dimension is shown as an example, however, the invention is not so limited.

[0054] Figure 4 shows a panel system 400 according to one example. In the example of Figure 4, the system 400 is used as a floor, although the invention is not so limited. Figure 4 shows a transducer 402 coupled to a panel 401, which may be similar to panel 300 from Figure 3. In one example, an insulating material 404 is included between panel 401 and a supporting frame 406. In one example, rails 408 are included with insulated bushings for frictionless support.

[0055] Figure 5 shows another panel system 500 according to one example. Figure 5 shows an example location of transducer 502 on a panel 501. In one example, in order to achieve a good frequency response from the transducer 502 within the panel 501, the location of the transducer 502 is 1/3 of the distance from each end, as illustrated in Figure 5. Other locations may be optimal for panels or objects of different geometries. The present inventors have discovered that 1/3 of the distance from each end, as illustrated in Figure 5, provides excellent response for rectangular flat panels as illustrated in Figures 3-5.

[0056] Figure 6 shows a partially enclosed space 600 using panels with transducers as described in the present disclosure. Figure 6 shows a floor panel 602, a back panel 604, and side panels 606. One or more transducers 601 are shown coupled to one or more of the panels 602, 604, 606. In one example, an immersive tactile experience is provided by the enclosed space 600.

[0057] Figure 7 shows another example system 700. In Figure 7, the audio input comes from a 3.5mm AUX input into a computational audio processor, which will run the various audio computational instructions. Once processed, the audio signal enters a 5-way line-level splitter, which will send the signal to different amplifiers. The Lp-2020a+ is a 20W amplifier with 2 outputs. Each output contains two 16 AWG conductive wires which connect to the positive and negative terminals of the transducer. The first Lp-2020a+ amplifier outputs to two TS-T110 tweeters, which handle the high frequencies. The next two Lp-2020a+ amplifiers output to 4 DAEX32QMB-4 exciters, which handle mid frequencies. The TPA3116 is a 100W amplifier which connects two 18 AWG wires to the positive and negative terminals of the IBEAM. The IBEAM is a bass transducer which handles the lowest frequencies and gives a low frequency tactile audio sensation.

[0058] Figure 8 shows another example system. In Figure 8, the system 800 may be implemented into a gaming chair. The DSP-408 will handle the bandpassing and EQ DSP computational instructions. The remote control (RC) and Bluetooth module (BT4.0) allow us to interface with the digital signal processor. This module is powered by 12V. The TPA3116D2 is a 100W, 24V amplifier which outputs the high frequency sounds to the TS-T110. The TPA3116 is a 200W, 24V amplifier that outputs wide-band mid frequency ranges to four DAEX30HES-4 tactile exciters, and tactile low frequencies to the IBEAM. The satellite distributed mode loudspeaker (DML) desk monitors driven by the DAEX25FHE-4 modules will communicate wirelessly via Wi-Fi or Bluetooth. In order to do this, an analog to digital converter (ADC) would be needed to convert the analog sound into a digital signal. Once converted, it is passed into a microcontroller unit (MCU) and sent to a Wi-Fi or Bluetooth chip. The digital data will travel between modules via USART. Once transmitted wirelessly, the digital data is sent to an MCU and passed on to a digital to analog converter (DAC), which converts the digital signal back to an audio signal to be amplified by another TPA3116D2 100W, 24V amplifier. Figure 9 shows another example system 900.

[0059] In one example, exciters are attached to a chair, for example a gaming chair. In one example, 4 exciters are attached to the bottom of a chair. In one example, a high-pass filter is coupled to the exciters to avoid over-excursion. In one example, an I-BEAM bass shaker is added, utilizing 3 separate amplifiers - 2 x 20 watt/channel RMS for the front and rear stereo pairs, and a 2 x 100 watt/channel amplifier, bridged to mono, for the bass shaker. This system, when receiving the same signal via a 3.5mm splitter (through a Bluetooth receiver), provides control over the tone of the exciters and the amount of bass shaking via the dedicated amp for the bass shaker. In one example, a low-pass filter is included for the bass shaker so that it does not inefficiently reproduce high frequencies, as well as a stronger high-pass filter so that the exciters do not experience over-excursion from inefficiently reproducing lower bass frequencies.

[0060] In one example, a system provides the upper mid-high frequencies at ear level. In one example, satellite L and R tweeter channels are placed on a desk to provide upper mid-high frequencies at ear level. In one example, some or all connections between systems are wireless, for example, Wi-Fi or Bluetooth. Other wireless communication protocols are also within the scope of the invention.

[0061] In one example, an I-BEAM is used as the low frequency device. In one example, the alternatives don't have the extended frequency range that the I-BEAM has demonstrated. The I-BEAM is a large exciter that exhibits a strong low frequency range, but other bass shakers certainly excel in the deep sub-bass frequency ranges as well. In one example, with a 4-ohm I-BEAM in mind, and 4x exciters for the high frequencies, a passive crossover is implemented at 300 Hz at 4 ohms, since 400 Hz can be an unpleasant resonant frequency in gaming chair implementations.
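For reference, the textbook first-order passive crossover formulas give the component values implied by a 300 Hz crossover into a 4-ohm load. The disclosure does not state component values or the crossover order, so this is an illustrative calculation only.

```python
import math

def first_order_crossover(f_c, z):
    """First-order passive crossover component values.

    f_c -- crossover frequency in Hz (300 Hz in the example above)
    z   -- nominal driver impedance in ohms (4 ohms here)

    Returns (L_henries, C_farads): a series inductor low-passes the
    bass driver, and a series capacitor high-passes the exciters.
    Standard textbook formulas, not values from the disclosure.
    """
    L = z / (2 * math.pi * f_c)      # low-pass inductor
    C = 1 / (2 * math.pi * f_c * z)  # high-pass capacitor
    return L, C
```

For 300 Hz at 4 ohms this works out to roughly a 2.1 mH inductor and a 133 uF capacitor, which illustrates why the crossover point and impedance must be fixed before parts can be chosen.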

[0062] In one example, a DSP is included before the amplifier, which has 3 dedicated channels (L + R + Sub) in a custom amplifier. In one example, a doubled number of exciters is included (4 on each side), where the excitation components can be wired in parallel and series, giving a 4-ohm nominal load again and allowing a reversion to a passive crossover for the time being. A computational audio process provides the ability to digitally set the crossovers and behaviors for each channel of amplified signal to the transducers.

[0063] Although the invention is not so limited, example applications of systems and methods described in the present disclosure include the gaming/entertainment industries - where users would benefit from better frequency response at lower sound pressure levels. Inclusion of tactile components means soft, far-away sounds (e.g. footsteps) are perceivable sooner. Another example application includes the internet of things (IoT) industry - a true 'invisible speaker' can be created by installing this technology inside walls and other home structures so that a user can have whole-home audio with sound emanating from the home itself. Another example application includes healthcare - Deaf/Hard of Hearing (D/HH) and Intellectually and Developmentally Delayed (IDD) populations have noted benefits from tactile reproduction of sound. Preliminary research findings on tactile pitch perception indicate that better pitch perception can be gained from prototype devices using this technology. Another example application includes extending the frequency response of sound-emanating objects and/or panels to complement traditional sound sources in a live mixing environment.

[0064] In one example, a backpack-sized module is provided that can be strapped to the back of a chair and deliver the full audio range through and around the chair. Paired with a small front-facing sound bar, a multimedia computer station will have a wide frequency response, even at low sound pressure levels, and perceived multichannel audio spatialization effects.

[0065] A sound engineer can use the intelligent computational instructions described herein to help assign ideal processing and parameters to a live sound source (e.g. a piano, guitar, or vocal performer). Many instances of the instructions can be used in parallel to process an ensemble consisting of many sound sources. A second layer of the instructions can also help the overall mix of a complex harmonic sound environment (e.g. a live band). In one example, the intelligent computational instructions are embodied in standalone software. Standalone software may be encoded on a computer readable medium, such as a CD or DVD ROM. Standalone software may also include computer readable instructions downloaded over the internet that are encoded on an end user's computer or device. In one example, the intelligent computational instructions are embodied in a host-dependent plug-in. In one example, the intelligent computational instructions are embodied in a hardware enclosure that is optimized with an integrated audio interface and professional input/output options.

[0066] Figures 10A and 10B show an additional example of a system 1000 using devices and methods of the present disclosure. In Figure 10A, a housing 1001 is shown with a number of components contained within. A first number of transducers 1002 and a second number of transducers 1003 are shown. In one example, the first number of transducers 1002 are higher frequency transducers, such as mid or high frequency. In one example, the second number of transducers 1003 are lower frequency transducers, such as bass frequency. In one example, at least one sensor 1010 is included. The example of Figure 10A shows two sensors 1010, although the invention is not so limited. One sensor 1010, or more than two sensors, are within the scope of the invention. In one example, the sensors 1010 include piezoelectric sensors.

Other sensor types, such as MEMS, etc. are also within the scope of the invention. In one example, the sensors 1010 are used to provide feedback data for computational instructions as described above. The sensors 1010 and computational instructions facilitate adjustment of resonance in an object or panel that transducers or exciters are coupled to.

[0067] Figure 10B shows additional components added to the housing 1001 from Figure 10A. In one example, an audio processing device 1004 is included. Components that may be housed in the audio processing device 1004 include a signal processor to separate transient and sustained groups from an audio signal. The audio processing device 1004 may also include one or more amplifiers. In one example, an analog to digital converter (ADC) 1012 is included. In one example, a digital to analog converter (DAC) 1014 is included. In one example, an input/output (I/O) circuit 1016 is included. In one example, one or more components, such as the ADC 1012, DAC 1014, and I/O circuit 1016, may be consolidated into fewer components, or a single board or device enclosure.

[0068] Figure 11 shows system 1100 including a housing 1101 similar to the housing 1001 from Figures 10A and 10B. In one example, similar components are included in housing 1101 as those in housing 1001. In the example of Figure 11, the housing 1101 is coupled to one or more straps 1102 to provide a backpack-like form factor. In one example, the system 1100 may be worn by an individual directly. In one example, the system 1100 may be attached to an intermediary structure, such as a chair. Although a backpack-like form factor is shown as an example, the invention is not so limited. Other structures and form factors may be used to incorporate audio processing devices and transducers as described in the examples above.

[0069] To better illustrate the method and apparatuses disclosed herein, a non-limiting list of embodiments is provided here:

[0070] Example 1 includes a tactile audio system. The system includes an audio processing device configured to separate an audio input into a transient group and a sustained group and a plurality of frequency bands for each of the transient group and the sustained group. The system includes a first number of amplifiers corresponding to one or more of the frequency bands and a second number of transducers coupled to the first number of amplifiers.

[0071] Example 2 includes the tactile audio system of example 1, wherein two transducers are coupled to one amplifier.

[0072] Example 3 includes the tactile audio system of any one of examples 1-2, wherein the plurality of frequency bands includes four frequency bands.

[0073] Example 4 includes the tactile audio system of any one of examples 1-3, wherein the four frequency bands include one high frequency band, two mid-range bands, and one low frequency band.

[0074] Example 5 includes the tactile audio system of any one of examples 1-4, wherein communication of one or more of the plurality of frequency bands between the audio processing device and a transducer is configured to be wireless.

[0075] Example 6 includes the tactile audio system of any one of examples 1-5, further including one or more feedback sensors to calibrate a frequency response of a subsequent object attached to one or more of the second number of transducers.

[0076] Example 7 includes the tactile audio system of any one of examples 1-6, wherein the audio processing device is configured to measure the frequency response of the subsequent object, compare the frequency response of the subsequent object to a target frequency response, and calculate a calibration filter.

[0077] Example 8 includes the tactile audio system of any one of examples 1-7, wherein the calibration filter is an inverse filter.

[0078] Example 9 includes the tactile audio system of any one of examples 1-8, wherein the second number of transducers are coupled to a backpack-like form factor.

[0079] Example 10 includes the tactile audio system of any one of examples 1-9, wherein the second number of transducers are coupled to a floor panel.

[0080] Example 11 includes the tactile audio system of any one of examples 1-10, wherein the second number of transducers are coupled to a wall panel.

[0081] Example 12 includes the tactile audio system of any one of examples 1-11, further including one or more audio speakers to augment the tactile response from the second number of transducers.
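Examples 6-8 describe measuring the frequency response of an attached object, comparing it to a target response, and calculating an inverse calibration filter. A minimal frequency-domain sketch follows; the flat target, the FFT size, and the regularization constant `eps` are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch of Examples 6-8: derive a regularized inverse filter
# that corrects a measured response toward a target response.
import numpy as np

def inverse_calibration_filter(measured_ir, n_fft=512, target=1.0, eps=1e-3):
    """Frequency-domain correction, approximately target / H_measured,
    regularized so nulls in the response are not boosted without bound."""
    H = np.fft.rfft(measured_ir, n_fft)
    return target * np.conj(H) / (np.abs(H) ** 2 + eps)

# Example: an object whose impulse response attenuates everything by half
measured_ir = np.zeros(64)
measured_ir[0] = 0.5
H_cal = inverse_calibration_filter(measured_ir)
combined = np.fft.rfft(measured_ir, 512) * H_cal  # near-flat corrected response
```

The regularized form (conjugate over magnitude-squared plus `eps`) is one common way to realize the inverse filter of Example 8 without amplifying deep notches in the measured response.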

[0082] Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.

[0083] Although an overview of the inventive subject matter has been described with reference to specific example embodiments, various modifications and changes may be made to these embodiments without departing from the broader scope of embodiments of the present disclosure. Such embodiments of the inventive subject matter may be referred to herein, individually or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single disclosure or inventive concept if more than one is, in fact, disclosed.

[0084] The embodiments illustrated herein are described in sufficient detail to enable those skilled in the art to practice the teachings disclosed. Other embodiments may be used and derived therefrom, such that structural and logical substitutions and changes may be made without departing from the scope of this disclosure. The Detailed Description, therefore, is not to be taken in a limiting sense, and the scope of various embodiments is defined only by the appended claims, along with the full range of equivalents to which such claims are entitled.

[0085] As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, plural instances may be provided for resources, operations, or structures described herein as a single instance. Additionally, boundaries between various resources, operations, modules, engines, and data stores are somewhat arbitrary, and particular operations are illustrated in a context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within a scope of various embodiments of the present disclosure. In general, structures and functionality presented as separate resources in the example configurations may be implemented as a combined structure or resource. Similarly, structures and functionality presented as a single resource may be implemented as separate resources. These and other variations, modifications, additions, and improvements fall within a scope of embodiments of the present disclosure as represented by the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

[0086] The foregoing description, for the purpose of explanation, has been described with reference to specific example embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the possible example embodiments to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The example embodiments were chosen and described in order to best explain the principles involved and their practical applications, to thereby enable others skilled in the art to best utilize the various example embodiments with various modifications as are suited to the particular use contemplated.

[0087] It will also be understood that, although the terms “first,” “second,” and so forth may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the present example embodiments. The first contact and the second contact are both contacts, but they are not the same contact.

[0088] The terminology used in the description of the example embodiments herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used in the description of the example embodiments and the appended examples, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0089] As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” may be construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.