

Title:
AUDIO DATA WITH EMBEDDED TAGS FOR RENDERING CUSTOMIZED HAPTIC EFFECTS
Document Type and Number:
WIPO Patent Application WO/2020/176383
Kind Code:
A1
Abstract:
A method and a device for providing information for haptic effects is provided. The device obtains an audio signal including audio data and control information, the audio signal including one or more audio segments and the control information including one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The device provides the audio data to render audio via an audio output system based on the one or more audio segments. The device converts the audio data into a haptic signal to generate one or more haptic effects based on the one or more haptic effect tags and provides the haptic signal to generate the one or more haptic effects via a haptic device.

Inventors:
WU QIONG (JP)
SABOUNE JAMAL (CA)
DA COSTA HENRY (CA)
BARBER COLIN (CA)
FLEMING JASON D (US)
OLIVER HUGUES-ANTOINE (CA)
Application Number:
PCT/US2020/019429
Publication Date:
September 03, 2020
Filing Date:
February 24, 2020
Assignee:
IMMERSION CORP (US)
International Classes:
G06F3/01
Foreign References:
EP2846226A22015-03-11
KR101901364B12018-11-22
EP2955609A12015-12-16
US201962810273P2019-02-25
Other References:
TILKI, J. F. et al.: "Encoding a hidden auxiliary channel onto a digital audio signal using psychoacoustic masking", Proceedings IEEE SOUTHEASTCON '97: Engineering New New Century, Blacksburg, VA, USA, 12-14 April 1997, IEEE, New York, NY, USA, pp. 331-333, XP010230825, ISBN: 978-0-7803-3844-9, DOI: 10.1109/SECON.1997.598705
Attorney, Agent or Firm:
HA, Jun S. (US)
Claims:
CLAIMS

What is claimed is:

1. A method of providing information for haptic effects, the method comprising:

obtaining, by at least one processor, an audio signal including audio data and control information, the audio signal including one or more audio segments and the control information including one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect;

providing the audio data to render audio via an audio output system based on the one or more audio segments;

converting the audio data into a haptic signal to generate one or more haptic effects based on the one or more haptic effect tags; and

providing the haptic signal to generate the one or more haptic effects via a haptic device.

2. The method of claim 1, wherein the one or more audio segments in the audio data include a plurality of audio segments and the one or more haptic effect tags in the control information include a plurality of haptic effect tags associated with the plurality of audio segments.

3. The method of claim 2, wherein each audio segment of the plurality of audio segments is associated with a respective audio group of a plurality of audio groups based on one or more characteristics of each audio segment, and

wherein the plurality of haptic effect tags are respectively associated with the plurality of audio groups.

4. The method of claim 1, wherein the audio signal is encoded audio data including the control information embedded in the audio data.

5. The method of claim 1, wherein obtaining the audio signal comprises:

generating the control information including the one or more haptic effect tags; and combining the control information with the audio data to generate the audio signal.

6. The method of claim 5, wherein obtaining the audio signal further comprises:

determining whether the control information for the audio data is present,

wherein generating the control information and embedding the control information in the audio data are performed when the control information is not present.

7. The method of claim 1, wherein obtaining the audio signal comprises:

receiving the audio signal from an external device,

wherein the control information is included with the audio data in the audio signal by the external device.

8. The method of claim 1, wherein converting the control information into the haptic signal is performed by the at least one processor in a remote device remote from a user device, and wherein the audio data and the haptic signal are provided, by the at least one processor, to the user device to render the audio and to generate the one or more haptic effects.

9. The method of claim 1, wherein extracting the audio data and the control information and converting the control information into the haptic signal are performed by the at least one processor at at least one of an application level, an application programming interface level, a framework level, a hardware abstraction layer, a driver level, and a firmware level of a user device.

10. The method of claim 1, further comprising:

updating the audio signal by adding additional information to the control information embedded in the audio signal to generate an updated audio signal,

wherein the control information provided to convert the audio data into the haptic signal is extracted from the updated audio signal.

11. The method of claim 1, wherein converting the audio data into the haptic signal is performed by the at least one processor in a user device, and wherein the method further comprises: rendering the audio based on the audio data via the audio output system associated with the user device; and generating the one or more haptic effects by the haptic device based on the haptic signal, the haptic device being associated with the user device.

12. The method of claim 1, wherein each of the one or more haptic effect tags includes one or more parameters including at least one of:

description information of the one or more audio segments,

haptic device information for the one or more audio segments indicating at least one of a type of haptic device to be used to render the haptic effect and a particular haptic device to be used to render the haptic effect,

a source of the one or more audio segments,

a haptic track associated with the one or more audio segments,

information about a software application used to render the one or more audio segments,

haptic effect time information indicating time for generating the haptic effect, and

user information associated with the one or more audio segments.

13. The method of claim 1, wherein the one or more audio segments include a plurality of audio segments, the method further comprising:

associating one or more first audio segments of the plurality of audio segments with a first output audio channel and one or more second audio segments of the plurality of audio segments with a second output audio channel;

wherein providing the audio data comprises outputting the one or more first audio segments via the first output audio channel and outputting the one or more second audio segments via the second output audio channel,

wherein the control information includes first control information associated with the first output channel and second control information associated with the second output channel,

wherein converting the audio data into the haptic signal includes converting the one or more first audio segments into a first haptic signal based on the first control information and converting the one or more second audio segments into a second haptic signal based on the second control information, and

wherein providing the haptic signal includes providing the first haptic signal to render a first portion of the one or more haptic effects via a first haptic output component of the haptic output device and providing the second haptic signal to render a second portion of the one or more haptic effects via a second haptic output component of the haptic output device.

14. The method of claim 1, wherein the control information is included in the audio data as a plurality of pulses having different magnitudes and different frequencies to indicate different parameters for the one or more audio segments.

15. The method of claim 1, wherein each of the one or more audio segments has a same time duration.

16. The method of claim 1, wherein at least two of the one or more audio segments at least partially overlap in time.

17. A method of providing audio data with haptic information, comprising:

segmenting, by at least one processor, the audio data into one or more audio segments; generating control information for the one or more audio segments, the control information including one or more haptic effect tags associated with the one or more audio segments, the one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect; and

providing an audio signal including the audio data and the control information to render the audio data and the one or more haptic effects based on the one or more audio segments.

18. The method of claim 17, wherein the one or more audio segments in the audio data include a plurality of audio segments and the one or more haptic effect tags in the control information include a plurality of haptic effect tags associated with the plurality of audio segments.

19. The method of claim 18, wherein generating the control information comprises:

associating each of the plurality of audio segments with a respective audio group of a plurality of audio groups based on one or more characteristics of each audio segment; and

associating the plurality of haptic effect tags respectively with the plurality of audio groups.

20. The method of claim 19, wherein the plurality of audio groups includes a voice group for audio samples having a human voice, a non-voice group for audio samples without a human voice, and a silence group for audio samples with no sound.

21. The method of claim 20, wherein at least one audio segment of the one or more audio segments that matches with the voice group based on the audio-to-haptics model is associated with a haptic effect tag indicating that no haptic effect for the at least one audio segment is to be rendered.

22. The method of claim 17, further comprising:

generating the audio signal by combining the control information with the audio data.

23. The method of claim 17, wherein generating the control information comprises:

applying an audio-to-haptics model to the audio data to associate one or more haptic effect tags with the one or more audio segments, the audio-to-haptics model being trained based on known associations between a plurality of audio samples and a plurality of sound types.

24. The method of claim 17, wherein each of the one or more haptic effect tags includes one or more parameters including at least one of:

description information of the one or more audio segments,

haptic device information for the one or more audio segments indicating at least one of a type of haptic device to be used to render the haptic effect and a particular haptic device to be used to render the haptic effect,

haptic effect time information indicating time for generating the haptic effect,

a source of the one or more audio segments,

a haptic track associated with the one or more audio segments,

information about a software application used to render the one or more audio segments, and

user information associated with the one or more audio segments.

25. The method of claim 17, wherein the control information is included with the audio data as a plurality of pulses having different magnitudes and different frequencies to indicate different parameters for the one or more audio segments.

26. The method of claim 17, wherein each of the one or more audio segments has a same time duration.

Description:
AUDIO DATA WITH EMBEDDED TAGS FOR RENDERING CUSTOMIZED HAPTIC

EFFECTS

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application claims the benefit of U.S. Provisional Application Serial No. 62/810,273, entitled “AUDIO DATA WITH EMBEDDED TAGS FOR RENDERING CUSTOMIZED HAPTIC EFFECTS” and filed on February 25, 2019, which is expressly incorporated by reference herein in its entirety.

FIELD OF THE INVENTION

[0002] The present invention is directed to a method and a device for providing audio output and haptic effects based on audio data.

BACKGROUND

[0003] Electronic device manufacturers strive to produce a rich interface for users. In some interface devices, kinesthetic effects (e.g., active and resistive force feedback) and/or tactile effects (e.g., vibration, texture, temperature variation, and the like) are provided to the user. In general, such effects are collectively known as “haptic feedback” or “haptic effects.” Haptic effects provide cues that intuitively enhance user experience with an electronic device. For example, the haptic effects may provide cues to the user of the electronic device to alert the user to specific events, or provide realistic tactile effects to generate greater sensory immersion with media rendered for the user. Haptic effects have been increasingly incorporated in a variety of portable electronic devices, such as cellular telephones, smart phones, tablets, portable gaming devices, and a variety of other portable electronic devices. Some devices generate haptic effects associated with audio to enhance experience of a user listening to the audio.

SUMMARY

[0004] One aspect of the embodiments herein relates to a method of providing information for haptic effects. The method may be performed by a processor of an electronic device. The method comprises obtaining, by the processor, an audio signal including audio data and control information, the audio signal including one or more audio segments and the control information including one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The method further comprises providing the audio data to render audio via an audio output system based on the one or more audio segments. The method further comprises converting the audio data into a haptic signal to generate one or more haptic effects based on the one or more haptic effect tags and providing the haptic signal to generate the one or more haptic effects via a haptic device.

[0005] One aspect of the embodiments herein relates to a device for providing information for haptic effects. The device may include a memory and at least one processor coupled to the memory. The at least one processor may be configured to obtain an audio signal including audio data and control information, the audio signal including one or more audio segments and the control information including one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The at least one processor may be further configured to provide the audio data to render audio via an audio output system based on the one or more audio segments. The at least one processor may be further configured to convert the audio data into a haptic signal to generate one or more haptic effects based on the one or more haptic effect tags, and to provide the haptic signal to generate the one or more haptic effects via a haptic device.

[0006] One aspect of the embodiments herein relates to a method of providing audio data with haptic information. The method may be performed by a processor of an electronic device. The method comprises segmenting, by the processor, the audio data into one or more audio segments. The method further comprises generating control information for the one or more audio segments, the control information including one or more haptic effect tags associated with the one or more audio segments, the one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The method further comprises providing an audio signal including the audio data and the control information to render the audio data and the one or more haptic effects based on the one or more audio segments.

[0007] One aspect of the embodiments herein relates to a device for providing information for haptic effects. The device may include a memory and at least one processor coupled to the memory. The at least one processor may be configured to segment the audio data into one or more audio segments. The at least one processor may be further configured to generate control information for the one or more audio segments, the control information including one or more haptic effect tags associated with the one or more audio segments, the one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The at least one processor may be further configured to provide an audio signal including the audio data and the control information to render the audio data and the one or more haptic effects based on the one or more audio segments.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] The foregoing and other features, objects and advantages of the invention will be apparent from the following description of embodiments hereof as illustrated in the accompanying drawings. The accompanying drawings, which are incorporated herein and form a part of the specification, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention. The drawings are not to scale.

[0009] FIG. 1A illustrates a block diagram of an electronic device configured to generate control information for audio data, according to an embodiment hereof.

[0010] FIG. 1B illustrates a block diagram of an electronic device configured to provide audio data and a haptic signal based on an audio signal, according to an embodiment hereof.

[0011] FIG. 1C illustrates a block diagram for encoding the audio data by embedding the control information in the audio data, according to an embodiment hereof.

[0012] FIG. 1D illustrates a block diagram showing an illustrative process for creating the audio signal by embedding the control information with the audio data in the audio signal and providing the audio data and the haptic signal for playback based on the audio signal, according to an embodiment hereof.

[0013] FIGS. 2A and 2B illustrate diagrams showing various interactions between electronic devices, according to various embodiments hereof.

[0014] FIG. 3 illustrates a diagram showing a carrier being modulated with a sine wave.

[0015] FIG. 4 is a diagram of a haptically enabled system/device 400, according to an example embodiment hereof.

[0016] FIGS. 5A-5F illustrate various examples for generating haptic effect tags for the control information, according to various embodiments hereof.

[0017] FIG. 6 illustrates an architecture used to provide audio and haptic playback, according to an embodiment hereof.

[0018] FIGS. 7A-7D are various examples of the architecture of FIG. 6 for a device running an operating system, according to various embodiments hereof.

[0019] FIG. 8 depicts a flow diagram for a method 800 of providing audio data with haptic control information.

[0020] FIG. 9 depicts a flow diagram for a method 900 of providing control information for haptic effects.

DETAILED DESCRIPTION

[0021] The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description.

[0022] In an aspect, embodiments described herein generally relate to the rendering of haptic effects based on audio data. In particular, embodiments described herein propose systems and methods to selectively provide one or more haptic effects based on an audio signal including audio data and control information related to the audio data. The audio data may be divided into discrete audio segments. Control information may be provided for the audio segments, where the control information indicates whether and/or how to provide a haptic effect for the audio segments. When an audio signal with the audio data and the control information is received, the audio data may be converted to a haptic signal based on the control information, so as to provide a haptic effect in conjunction with an audio output rendered based on the audio data.

[0023] Various approaches may be used to provide a haptic effect associated with audio data by analyzing the audio data while playing the audio data in real time. In some instances, the haptic effect for the audio data may be pre-authored by a content provider of the audio data or a haptic designer for the audio data, before the audio data is provided to the user. In other instances, haptic effects may be determined based on analysis of the audio data, before or after the audio data is sent by the content provider. For example, audio data in gaming applications may convey the sounds of multiple sound sources, including background music, voice, gunshots of different users or players, and explosions, as well as the sounds of other objects in the game, such as sounds of a user’s steps or sounds of a car, plane, tank, etc. In some applications, haptic effects associated with audio data may be provided without considering many of the features and information related to the audio data, such as the sound sources. For example, one approach may provide a haptic effect for the audio data as long as the audio data is within a particular frequency range and/or a particular amplitude range. However, audio may be perceived to include a variety of sounds with different characteristics. Further, using the audio data as a whole for conversion to a haptic effect, without considering different audio segments, may prevent consideration of the individual characteristics of the audio segments during haptic conversion. As a result, the haptic effects produced by such an approach do not distinguish between audio segments with different characteristics, and thus may not correspond well with the user’s overall experience of the audio being rendered.
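By way of illustration only, the tag-free frequency-range/amplitude-range approach described above might be sketched as follows; the band limits, threshold, and function names are hypothetical choices for illustration, not taken from this application:

```python
import numpy as np

def naive_haptic_trigger(samples, rate, band=(20.0, 200.0), threshold=0.1):
    """Fire a haptic effect whenever the audio has enough energy inside a
    fixed frequency band -- the tag-free baseline described above."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_fraction = spectrum[in_band].sum() / max(spectrum.sum(), 1e-12)
    return band_fraction > threshold

rate = 48_000
t = np.arange(rate // 50) / rate                 # one 20 ms window
rumble = 0.8 * np.sin(2 * np.pi * 100 * t)       # 100 Hz content, inside the band
print(naive_haptic_trigger(rumble, rate))        # True: haptics would fire
```

Because this gate sees only band energy, a gunshot, an explosion, and a bass line all trigger the same way, which is the limitation the tagged approach below addresses.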

[0024] By contrast, an embodiment of the present disclosure provides control information for one or more audio segments to indicate at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments, such that the one or more audio segments may be converted into a haptic signal based on the control information. The audio data and the control information may be provided as an audio signal including both the audio data and the control information. In embodiments, the audio data and/or the control information may be encoded. The audio data may be provided to an audio output device to render the audio. Further, the audio data may be converted into a haptic signal based on the control information. The generated haptic signal may be provided to a haptic output device to generate a haptic effect. The embodiments may be readily applied to gaming environments as well as to any haptically augmented content (e.g., mobile applications, advertisements, trailers, music, movies, etc.) in which the haptic effects are generated based on audio data.

[0025] FIG. 1A illustrates a block diagram of an electronic device 100 configured to generate control information for audio data, according to an embodiment hereof. The components within the electronic device 100 illustrated with dotted line blocks may be optional components that may or may not be present in the electronic device 100. In an aspect, the electronic device 100 may include a control circuit 102. The control circuit 102 may be configured to perform various tasks, such as controlling other components of the electronic device 100, processing data and information, executing algorithms and/or software applications, etc. The electronic device 100 further includes a communication interface 108 configured to communicate with another device. The electronic device 100 includes a memory 110 to store information, such as audio data, software, drivers, etc.

[0026] In an aspect, the control circuit 102 may be implemented as one or more processors (e.g., a microprocessor), a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a programmable logic array (PLA), or other control circuit. The control circuit 102 may be part of a general purpose control circuit for the electronic device 100, such as a processor for executing an operating system or for implementing other functionality of the electronic device 100, or the control circuit 102 may be a control circuit dedicated to performing the features described herein. In an aspect, the control circuit 102 may include any amplifier circuit, any digital to analog converter (DAC), or any other circuit for creating a drive signal that can drive a haptic output device. In an aspect, the control circuit 102 may be configured to perform various tasks by executing instructions stored in the memory.

[0027] In embodiments discussed herein, control circuits are described as including “components.” Such “components” may be computer instructions operating on the control circuits. The computer instructions may be stored on a memory associated with the control circuit. Although described as separate components, these may be functional aspects of the control circuit, rather than discrete physical aspects. In further embodiments, components described herein in association with control circuits may include discrete physical hardware.

[0028] In an aspect, the memory 110 may be a non-transitory computer-readable medium, and may include read-only memory (ROM), random access memory (RAM), a solid state drive (SSD), a hard drive, or other type of memory. In FIG. 1A, the memory 110 may store a haptic track and instructions that can be executed by the control circuit 102 to generate a control signal according to an aspect herein. In an aspect, the memory 110 may store other information and/or modules.

[0029] In some aspects, the electronic device 100 may include an audio output system 112 configured to render audio data to provide audio to the user. The audio output system 112 may include various audio output components, such as an audio output component A 114 and an audio output component B 116. For example, the audio output component A 114 may be a left speaker for playing audio associated with one channel and the audio output component B 116 may be a right speaker for playing audio associated with another channel. In another example not shown in FIG. 1A, the number of audio output components may be greater than two. For example, for a 5.1 channel speaker set, there may be six audio output components, five speakers for playing audio at five different channels and a subwoofer to provide low-frequency audio. In additional aspects, the audio output system 112 may include hardware for outputting audio to an external device for rendering. For example, the audio output system 112 may include a headphone jack and/or wireless antenna for transmitting the audio data for rendering on a device external to the electronic device 100.

[0030] In some aspects, the electronic device 100 may include a haptic output device 118 configured to generate a haptic effect. The haptic output device 118 may generate a haptic effect based on a haptic signal provided by one or more of the control circuit 102, the communication interface 108, and the memory 110. In an aspect, the haptic output device 118 may be a standard-definition (SD) haptic output device and/or a high-definition (HD) haptic output device. The SD haptic output device may include an actuator (e.g., eccentric rotating mass (ERM) actuator) that is designed to be driven with a DC signal, or an actuator (e.g., linear resonant actuator (LRA)) designed to be driven at only a single frequency. The HD haptic output device may include a piezoelectric actuator, electroactive polymer (EAP) actuator, any other smart material actuator, or a wideband LRA. The piezoelectric actuator, EAP actuator, and wideband LRA may each be designed to be driven in a range of frequencies having a nonzero bandwidth, i.e., a bandwidth that is greater than a single frequency, and may each have a structure that supports a nonzero acceleration bandwidth for motion of that structure. In some instances, the HD haptic output device may include an ERM actuator that is designed to be driven with an alternating current (AC) signal and is further designed to be driven in a range of frequencies having a nonzero bandwidth. In such an example, the ERM actuator may further have a nonzero acceleration bandwidth. In an aspect, a haptic output device 118 may include a vibrotactile haptic actuator configured to generate a haptic effect. In an aspect, a haptic output device 118 may include an ultrasound emitter configured to generate an ultrasound-based haptic effect. In an aspect, a haptic output device 118 may have a single resonant frequency or multiple resonant frequencies. In an aspect, a haptic output device 118 may have no resonant frequency. In an aspect, the haptic output device 118 may include one or more electrodes for providing electrical stimulus such as electrostatic stimulus as a haptic effect or for providing a friction effect. In an aspect, a haptic output device 118 may provide a haptic effect by deformation.

[0031] The electronic device 100 may generate the control information and may provide an audio signal including control information and audio data. In such an aspect, the electronic device 100 may be a device used by a content provider or a content designer to provide the signal to a user device, or may be a user device used to receive the audio signal to render the audio data. As used herein, “audio signal” refers to a signal in an audio format. Audio signals may include analog signals configured to carry audio information. Audio signals may also be digital signals encoded according to any suitable encoding protocol, e.g., WAV, MP3, WMA, MIDI, Real Audio, etc.

[0032] As used herein, “audio data” refers to information contained within the audio signal intended to be rendered as an audio output. Audible outputs from an audio output device, such as a speaker, include the audio data contained in the audio signal. As used herein, “control information” refers to additional information contained within the audio signal. The control information is included within the audio signal and is designed to provide additional information about the audio data and/or about haptic effects to be rendered according to the audio data. In embodiments, the control information may be embedded within the audio data and/or within the audio signal. In embodiments, the control information may be encoded into the audio signal with the audio data. The control information may be included in the audio signal in such a way as to be imperceptible to a listener if the audio signal is rendered at an audio output device. For example, in an analog audio signal, the control information may be included at frequencies outside of the normal audible range and/or may be included at amplitudes below human perception. In a digital audio signal, the control information may be included as a byte code that, when the rest of the audio signal is decoded for playback, is either not decoded or does not result in a perceptible audio signal.

[0033] In particular, the control circuit 102 of the electronic device 100 may segment the audio data into one or more audio segments. In one aspect, each of the audio segments may have the same duration in time, for example, 20 msec. In another aspect, two or more of the audio segments may have different durations in time. For example, if the audio segments are segmented based on particular sounds of the audio segments, a duration for an audio segment for one type of sound (e.g., a gunshot) may be different than a duration for an audio segment for another type of sound (e.g., an explosion). In one aspect, two or more of the audio segments may at least partially overlap in time. For example, an audio segment (e.g., for a gunshot) from one source may at least partially overlap with another audio segment (e.g., for an explosion) from another source. Each audio segment may represent a specific portion in time of the audio data. Each audio segment may also represent specific frequency components related to a particular sound within the audio data. For example, if an explosion occurs in the audio data concurrently with background music, each particular sound or source may be segmented into separate audio segments, one audio segment with the explosion sound and another audio segment with the background music.
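By way of illustration only, fixed-duration segmentation with optional overlap might be sketched as follows; the 20 ms window, the hop size, and the function name are illustrative assumptions (the application also contemplates variable-length, per-sound segments):

```python
import numpy as np

def segment_audio(samples, rate, window_ms=20.0, hop_ms=20.0):
    """Split audio into fixed-duration segments; hop_ms < window_ms yields
    segments that partially overlap in time, as described above."""
    window = int(rate * window_ms / 1000.0)
    hop = int(rate * hop_ms / 1000.0)
    return [samples[start:start + window]
            for start in range(0, len(samples) - window + 1, hop)]

rate = 48_000
audio = np.random.uniform(-1.0, 1.0, rate)            # one second of audio
segments = segment_audio(audio, rate)                 # back-to-back 20 ms windows
overlapped = segment_audio(audio, rate, hop_ms=10.0)  # 50% overlap in time
print(len(segments), len(overlapped))                 # 50 99
```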

[0034] The control circuit 102, via a control information component 104, may generate control information for the one or more audio segments, where the control information includes one or more haptic effect tags associated with the one or more audio segments. The one or more haptic effect tags may indicate at least (1) whether to generate a haptic effect for the one or more audio segments, and/or (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect.

[0035] Throughout the specification, the term “haptic effect tag” may be used to describe control information that is included within an audio signal with audio data. Other terms, such as flags, watermarks, control information, identifiers, metadata, and others also may be used without departing from the scope of the invention. Such haptic effect tags add control information to the audio data that is used to generate haptic output. The haptic effect tags are further configured to match audio data to different types of haptic effects. Accordingly, a higher level of haptic effect tuning is provided through the use of haptic effect tags.

[0036] In an aspect, the one or more audio segments in the audio data may include multiple audio segments and the one or more haptic effect tags in the control information may include multiple haptic effect tags associated with the multiple audio segments. In embodiments, each audio segment may have zero, one, or more associated haptic effect tags. Each of the multiple haptic effect tags may indicate at least one of (1) whether to generate the haptic effect for one or more respective audio segments of the multiple audio segments, and (2) the at least one parameter of the algorithm for converting the one or more respective audio segments of the multiple audio segments into the haptic effect. In additional aspects, the multiple haptic effect tags may further include parameters indicative of one or more of description information of the one or more audio segments, haptic device information for the one or more audio segments indicating at least one of a type of haptic device to be used to render the haptic effect and a particular haptic device to be used to render the haptic effect, a source or particular sound associated with the one or more audio segments, a haptic track associated with the one or more audio segments, information about a software application used to render the one or more audio segments, haptic effect time information indicating a time for generating the haptic effect, and user information associated with the one or more audio segments.
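By way of illustration only, the tag parameters listed above might be pictured as a small record type; the field names below are hypothetical stand-ins chosen to mirror that list, not a format defined by this application:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HapticEffectTag:
    """Illustrative container for the tag parameters enumerated above."""
    haptics_on: bool = True                 # (1) whether to generate an effect
    algorithm_params: dict = field(default_factory=dict)  # (2) conversion params
    description: Optional[str] = None       # e.g., "gunshot", "voice"
    haptic_device: Optional[str] = None     # device type or instance to render on
    source: Optional[str] = None            # sound source of the segment(s)
    haptic_track: Optional[str] = None      # pre-authored haptic track, if any
    application: Optional[str] = None       # software application rendering audio
    effect_time_ms: Optional[float] = None  # when to generate the effect
    user: Optional[str] = None              # user associated with the segment(s)

gunshot_tag = HapticEffectTag(description="gunshot",
                              algorithm_params={"gain": 1.0, "band_hz": (40, 120)})
voice_tag = HapticEffectTag(haptics_on=False, description="voice")
```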

[0037] In some aspects, the control circuit 102 may generate the control information by associating each of the multiple audio segments with a respective audio group of audio groups based on one or more characteristics of a respective audio segment. Each audio group of the audio groups may be associated with one or more of the audio segments. The haptic effect tags may be respectively associated with the audio groups. In some cases, the audio data may be segmented into multiple audio segments corresponding to respective sound sources (i.e., particular sounds within the audio data). In a gaming example, the audio data may be segmented into several groups of audio segments, including: Group 1 - the sound from the main player; Group 2 - the sound from the enemy players; Group 3 - the sound from the main player’s team members; Group 4 - the sound of background music; and Group 5 - the sound from other sources such as remote objects (e.g., a car, tank, plane, etc.). In another example, the audio data may be segmented into different groups based on audio frequency ranges or audio channels. For example, a sound from one audio channel may be associated with a first group and a sound from another channel may be associated with a second group. In further examples, the audio data may be segmented into several groups according to sound characteristics of the audio data. For example, a gunshot may have characteristic audio data associated with it, and the control circuit 102 may separate out the frequency components of the gunshot from the frequency components of other sounds occurring at the same time (e.g., character voices and background music). Examples of the audio segments being associated with respective groups are provided infra in reference to FIGS. 5A-5F.

[0038] In an aspect, each group of audio segments may be associated with a haptic effect tag that indicates a respective channel for rendering haptic effects for a respective user. For example, haptic effects may be rendered to a user based upon one or more groups, such as Group 1, or another combination of groups. In another example, haptic effects corresponding to a gunshot by a gun may be provided only at a user device used by a player firing the gun and another user device used by another player being hit by the gunshot. In this example, user devices used by players other than the users involved with the gunshot may not provide haptic effects related to the gunshot. As such, other players or users may not experience unneeded haptic effects.

[0039] In an aspect, the control information may be generated when designing media data (e.g., audiovisual, gaming, AR/VR, interface effects, etc.) that includes the audio data, through a dedicated plugin in a design interface (e.g., Pro Tools, Unity, etc.). In some configurations, the control information may be generated automatically (e.g., using a machine learning algorithm configured to analyze the audio data to generate haptic tags, such as those discussed above) or may be generated by the designer using a dedicated user interface (e.g., check boxes that can be checked or unchecked). In this aspect, the electronic device 100 that generates the control information may be a device used by a content provider/designer and may be configured to provide the audio signal including the audio data and control information to another device, such as a user device or a playback device, for audio/haptic playback.

[0040] In an aspect, the control information may be generated after the design process, automatically, or through a designer input. In this aspect, the electronic device 100 that generates the control information may be a user device or a playback device used for audio/haptic playback (e.g., a mobile device, tablet, game console, etc.) and/or an intermediate device or platform (e.g., server, cloud server, etc.) interacting with the user device or the playback device. In an example, if the control information is generated by the user device or the playback device used for audio/haptic playback, the control information may indicate a nature of a software application or an application program interface (e.g., a sound pool, a media player, etc.) that is used by the user device or the playback device to request the audio data to be played.

[0041] In an aspect, the control circuit 102, via an audio data encoding component 106, may encode the audio data by embedding the control information in the audio data, to generate an audio signal. Thus, the audio signal may include the audio data with the control information embedded therein. For an analog audio signal, embedding the control information into the audio data may involve, for example, adding a control information signal to the audio data to generate the audio signal. Embedding control information into the audio data may also involve encoding the control information with the audio data in the audio signal. By including the control information with the audio data, the control information becomes a part of the audio signal. As such, the control information included with the audio data may make its way down an audio stack of a device (e.g., the electronic device 100 or another device receiving the audio signal) as an audio signal, to an audio integrated circuit (IC) implementing real-time audio-to-haptic conversion.

[0042] An example showing how the audio data may be encoded is provided in FIG. 1C. FIG. 1C illustrates a block diagram for encoding the audio data by embedding the control information in the audio data according to an embodiment hereof. A processor in the control circuit 102 may be configured to execute various programs, such as gaming or other applications, depicted as application(s) 130. The application(s) 130 may reside in the memory 110 or may reside outside of the memory 110 or the electronic device 100. As part of its functionality, the application(s) 130 are configured to generate media data including audio data and/or video data, such as the audio data 132. The audio data encoding component 106 or a processor in the control circuit 102 may embed the control information in the audio data 132 to generate the audio signal 134. The audio signal 134 may be stored in the memory 110.

[0043] One or more of the following approaches may be used to embed the control information in the audio data. According to one approach, the control information may be embedded into the audio signal as multiple pulses (e.g., a collection of pulses), such as a sine wave, a square wave, etc., where the pulses have different magnitudes (e.g., 0 to 1) and frequencies so as to encode different parameters and messages. In another approach, the control information may be encoded as a pattern of “0”-valued samples spaced with a specific number of non-zero audio samples (i.e., zeroing some samples in the audio signal). The number of null/“0” samples and the spacing between them may be used to encode different parameters/messages. According to another approach, the control information may be embedded along the least significant bits of the audio data. In each of these approaches, an encoded header may be used to describe the control information embedded in the audio signal.
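By way of illustration only, the least-significant-bit approach mentioned above might be sketched as follows for 16-bit PCM; the framing of one control bit per sample (with no header) is an assumption made here purely for illustration:

```python
import numpy as np

def embed_lsb(pcm, bits):
    """Hide control bits in the least significant bit of 16-bit PCM samples."""
    out = pcm.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | np.asarray(bits, dtype=np.int16)
    return out

def extract_lsb(pcm, n_bits):
    """Recover the first n_bits control bits from the PCM stream."""
    return (pcm[:n_bits] & 1).astype(np.uint8)

pcm = (np.random.uniform(-1, 1, 960) * 32767).astype(np.int16)  # 20 ms @ 48 kHz
control_bits = [1, 0, 1, 0, 1, 1, 0, 0]          # e.g., a signature nibble + data
encoded = embed_lsb(pcm, control_bits)
print(extract_lsb(encoded, 8))                   # [1 0 1 0 1 1 0 0]
```

Flipping only the least significant bit perturbs each sample by at most one quantization step, which is why this embedding remains imperceptible on playback.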

[0044] In an aspect, the frequencies for the control information embedded in the audio data may be set to be in a range inaudible to humans (e.g., higher than 20 kHz, such as 22 kHz) or may be set to be in a very low frequency range (e.g., less than 20 Hz). A frequency greater than 20 kHz is outside the human hearing range but still within a supported frequency range (e.g., using a 48 kHz audio sampling rate and a 24 kHz maximum audio frequency). The audio data may have little or no content in the inaudible range, especially if the audio data is compressed. In another aspect, the frequencies for the control information embedded in the audio data may be within a range audible to humans (e.g., greater than 20 Hz and less than 20 kHz). In such an aspect, when providing the audio signal, the control information that is audible may be filtered out such that the audio data without the control information is provided for audio playback.

[0045] In an example, by modulating a high frequency carrier with a sine wave having a lower frequency than the high frequency carrier, a single bit of control information may be encoded at a time. For example, as shown in FIG. 3, a 22 kHz carrier may be modulated with a 1 kHz sine wave, where each cycle of the 1 kHz sine wave may be ON or OFF so as to encode a single bit of control information at a time.
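By way of illustration only, the per-cycle on/off keying of the carrier described above might be sketched as follows; the amplitude scaling and the smooth keying envelope are illustrative choices, not values specified by this application:

```python
import numpy as np

def encode_bits_ook(bits, rate=48_000, carrier_hz=22_000, symbol_hz=1_000):
    """Key a 22 kHz carrier on or off once per 1 kHz cycle, carrying one
    control bit per cycle, as in the modulation example above."""
    cycle = rate // symbol_hz                       # 48 samples per bit
    t = np.arange(len(bits) * cycle) / rate
    carrier = np.sin(2 * np.pi * carrier_hz * t)
    envelope = np.abs(np.sin(2 * np.pi * (symbol_hz / 2) * t))  # one lobe per cycle
    gate = np.repeat(np.asarray(bits, dtype=float), cycle)      # ON/OFF per bit
    return 0.05 * gate * envelope * carrier         # kept at a low level

control = encode_bits_ook([1, 0, 1, 1])             # four control bits -> 4 ms
program = np.zeros_like(control)                    # the audible audio data
audio_signal = program + control                    # embed by simple addition
```

At a 48 kHz sampling rate the 22 kHz carrier stays below the 24 kHz Nyquist limit yet above the audible range, matching the inaudible-band reasoning in the preceding paragraph.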

[0046] A set window length in time may be used for each audio segment. By dividing the audio data into 20 ms windows, 20 bits of data can be used per window to encode the control information. In one example data structure, the 20 bits of data may include 8 synchronization bits, including a 4-bit signature (e.g., 1010) and a 4-bit checksum (e.g., XOR of the other 16 bits in 4-bit groupings), as well as 12 data bits including a 3-bit force value for each 5 ms interval in the 20 ms window.

[0047] In another example data structure, the 20 bits of data may include 10 synchronization bits, including a 6-bit signature (e.g., 101100) and a 4-bit checksum (e.g., XOR of the other 16 bits in 4-bit groupings), as well as 2 bits of control data and 8 additional data bits. The structure of the 8 additional data bits may include: an 8-bit force value for the 20 ms window; 4-bit force values for each 10 ms sub-window in the 20 ms window; or 8-bit values to select an audio to haptic conversion algorithm and/or algorithm parameters, etc.
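By way of illustration only, the first 20-bit window layout above (4-bit signature, 4-bit checksum, four 3-bit force values) might be packed and verified as follows; the bit ordering within the word is an assumption made for illustration:

```python
def pack_window(force_values, signature=0b1010):
    """Pack one 20 ms window: 4-bit signature, 4-bit checksum, and four
    3-bit force values (one per 5 ms interval), per the first layout above."""
    assert len(force_values) == 4 and all(0 <= v < 8 for v in force_values)
    data = 0
    for v in force_values:                  # 12 data bits, MSB first
        data = (data << 3) | v
    payload = (signature << 12) | data      # the 16 non-checksum bits
    checksum = 0
    for shift in (12, 8, 4, 0):             # XOR of the 16 bits in 4-bit groupings
        checksum ^= (payload >> shift) & 0xF
    return (signature << 16) | (checksum << 12) | data

def unpack_window(word):
    """Recover the force values and check the signature and checksum."""
    signature = (word >> 16) & 0xF
    checksum = (word >> 12) & 0xF
    data = word & 0xFFF
    expected = signature
    for shift in (8, 4, 0):
        expected ^= (data >> shift) & 0xF
    ok = (signature == 0b1010) and (checksum == expected)
    forces = [(data >> s) & 0x7 for s in (9, 6, 3, 0)]
    return ok, forces

word = pack_window([7, 3, 0, 5])
print(unpack_window(word))                  # (True, [7, 3, 0, 5])
```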

[0048] The control circuit 102 may provide the audio signal including the audio data and the control information to render the audio data and the one or more haptic effects based on the one or more audio segments. The control circuit 102 may provide the audio signal to any suitable device, including servers and cloud servers for storage and eventual transmission to an end user, user devices for storage prior to playback, and user devices for immediate playback, among others.

[0049] Using audio samples, machine learning may be used to train a model that can be applied to automatically generate the control information for the audio data. For example, machine learning may be used to detect different types or sources of sounds and to generate the control information based on the different types of sounds. In particular, the control circuit 102 may generate the control information by applying an audio-to-haptics model to the audio data. The audio-to-haptics model may be trained based on a known association of each of the audio samples with a respective sound or type of sound. In further aspects, the audio-to-haptics model may be trained according to known associations between the audio samples and known audio groups. These associations may be based on one or more characteristics of the audio samples, where, for example, each of the audio groups is associated with multiple audio samples having a same or similar characteristic.

[0050] In this aspect, the control information may be generated based on the application of the audio-to-haptics model by associating the haptic effect tags with the audio segments based on the audio-to-haptics model. The audio samples and the audio-to-haptic model may be stored in the memory 110. In one example, the electronic device 100 may collect or receive audio samples and train the audio-to-haptics model based on the association of the audio samples with a respective group of audio groups. In another example, the audio-to-haptics model may be trained outside of the electronic device 100 and the electronic device 100 may receive the audio-to-haptics model after the training is completed.

[0051] In one example, the audio samples may be generated by segmenting sample audio data into multiple audio samples. The audio samples may be converted into a spectrogram, and the audio-to-haptics model may be trained to determine, based on the spectrogram, which group, type, or source of sound each audio sample belongs to. After the training, if the audio-to-haptics model is applied to the audio segments, the audio segments may be automatically associated with respective groups or respective types or sources of sound based on the audio-to-haptics model.

[0052] In an example, the audio groups may include a voice group for audio samples having a human voice, a non-voice group for audio samples without a human voice, and a silence group for audio samples with no sound. For example, audio samples including dialogues or monologues may be classified as a voice group, audio samples including a gunshot sound may be classified into a non-voice group, and audio samples with no or little sound may be classified into a silence group. In an aspect, at least one audio segment of the audio segments that matches with the voice group based on the audio-to-haptics model may be associated with a haptic effect tag indicating that no haptic effect for the at least one audio segment is to be rendered. Thus, for example, no haptic effect may be provided for the audio segment that matches with the voice group.
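By way of illustration only, the grouping into voice, non-voice, and silence might be sketched as follows; the application contemplates a trained audio-to-haptics model operating on spectrograms, whereas this stand-in uses crude, hypothetical heuristics and thresholds:

```python
import numpy as np

SILENCE_RMS = 0.01             # illustrative threshold, not from the application

def classify_segment(segment, rate):
    """Crude stand-in for the trained audio-to-haptics model: assign a
    segment to the silence, voice, or non-voice group from simple features."""
    rms = np.sqrt(np.mean(segment ** 2))
    if rms < SILENCE_RMS:
        return "silence"
    spectrum = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / rate)
    centroid = (freqs * spectrum).sum() / max(spectrum.sum(), 1e-12)
    return "voice" if 100.0 <= centroid <= 1000.0 else "non-voice"

def tag_for_group(group):
    """Per the rule above, voice segments are tagged to render no haptics."""
    return {"haptics_on": group == "non-voice"}

rate = 48_000
t = np.arange(rate // 10) / rate                      # a 100 ms segment
rumble = 0.5 * np.sin(2 * np.pi * 60 * t)             # low-frequency, non-voice
print(tag_for_group(classify_segment(rumble, rate)))  # {'haptics_on': True}
```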

[0053] In an aspect, each of the one or more haptic effect tags in the control information may include one or more parameters including at least one of: a flag indicating whether to provide a haptic effect for the one or more audio segments, at least one parameter of an algorithm for converting the one or more audio segments to the haptic signal, description information of the one or more audio segments, haptic device information for the one or more audio segments indicating at least one of a type of haptic device to be used to render the haptic effect or a particular haptic device to be used to render the haptic effect, a source of the one or more audio segments, a haptic track associated with the one or more audio segments, information about a software application used to render the one or more audio segments, and user information associated with the one or more audio segments. More details with regard to the use of the above information are provided infra in reference to FIG. IB.

[0054] FIG. 1B illustrates a block diagram of an electronic device 150 configured to provide audio data and a haptic signal based on an audio signal, according to an embodiment hereof. The components within the electronic device 150 illustrated with dotted line blocks may be optional components that may or may not be present in the electronic device 150. In an aspect, the electronic device 150 may include a control circuit 152. The control circuit 152 may be configured to perform various tasks, such as controlling other components of the electronic device 150, processing data and information, executing algorithms and/or software applications, etc. The electronic device 150 further includes a communication interface 158 configured to communicate with another device. The electronic device 150 includes a memory 160 to store information, such as audio data, software, drivers, etc.

[0055] In some aspects, the electronic device 150 may include an audio output system 162 configured to render audio data to provide audio to the user. The audio output system 162 may include various audio output components, such as an audio output component A 164 and an audio output component B 166. For example, the audio output component A 164 may be a left speaker for playing audio associated with one channel and the audio output component B 166 may be a right speaker for playing audio associated with another channel. For example, for a 5.1 channel speaker set, there may be six audio output components, five speakers for playing audio at five different channels and a subwoofer to provide low-frequency audio. In additional aspects, the audio output system 162 may include hardware for outputting audio to an external device for rendering. For example, the audio output system 162 may include a headphone jack and/or wireless antenna for transmitting the audio data for rendering on a device external to the electronic device 150.

[0056] In some aspects, the electronic device 150 may include a haptic output device 168 configured to render a haptic effect. The haptic output device 168 may render a haptic effect based on a haptic signal provided by one or more of the control circuit 152, the communication interface 158, and the memory 160.

[0057] Structures of the control circuit 152, the communication interface 158, the memory 160, the audio output system 162 with an audio output component A 164 and an audio output component B 166, and the haptic output device 168 may be the same as or similar to those of the control circuit 102, the communication interface 108, the memory 110, the audio output system 112 with the audio output component A 114 and the audio output component B 116, and the haptic output device 118 of the electronic device 100 of FIG. 1 A. Therefore, detailed descriptions about the structures of the components of the electronic device 150 are omitted for brevity.

[0058] In an aspect, the electronic device 150 may be a separate device from the electronic device 100. In another aspect, the electronic device 150 may be a part of the electronic device 100 and/or may have the same components as those of the electronic device 100.

[0059] The control circuit 152 of the electronic device 150 may obtain an audio signal including audio data and control information. As described above, the audio data may include one or more audio segments and the control information may include one or more haptic effect tags associated with the one or more audio segments. The one or more haptic effect tags may indicate (1) whether to generate a haptic effect for the one or more audio segments, and/or (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The haptic effect tags may further include at least description information of the one or more audio segments, haptic device information for the one or more audio segments indicating at least one of a type of haptic device to be used to render the haptic effect and a particular haptic device to be used to render the haptic effect, a source of the one or more audio segments, a haptic track associated with the one or more audio segments, information about a software application used to render the one or more audio segments, haptic effect time information indicating time for generating the haptic effect, and/or user information associated with the one or more audio segments.

[0060] In an aspect, the audio signal may include audio data and control information embedded or otherwise included with the audio data. The details on embedding the control information in the audio data are discussed in more detail above.

[0061] In an aspect, the control circuit 152 of the electronic device 150 may obtain, via the communication interface 158, the audio signal from the electronic device 100 that generates the audio signal. In another aspect, the control circuit 152 of the electronic device 150 may obtain, via the communication interface 158, the audio signal from a storage device or an intermediate device (e.g., a server or cloud server) that stores the audio signal from the electronic device 100.

[0062] In an aspect, the control circuit 152, via an extraction component 154, may extract the control information from the audio signal. In an aspect, the control circuit 152, via an extraction component 154, may also extract the audio data from the audio signal.

[0063] The control circuit 152 may provide the audio data to render audio via the audio output system based on the one or more segments included in the audio data. The control circuit 152, via a haptic conversion component 156, may convert the audio data into a haptic signal to generate one or more haptic effects based on the one or more haptic effect tags, and may provide the haptic signal to generate the one or more haptic effects via a haptic device. In one aspect, the audio output system and/or the haptic device may exist within the electronic device 150, as the audio output system 162 and/or the haptic output device 168, respectively. In another aspect, the audio output system and/or the haptic device may exist as separate device(s) outside the electronic device 150.

[0064] In an aspect, the electronic device 150 may be a user device or a playback device. In such an aspect, the control circuit 152 may extract the audio data and the control information and may convert the control information into the haptic signal at at least one of an application level, an application programming interface (API) level, a framework level, a hardware abstraction layer, a driver level, and a firmware level of the electronic device 150. For example, the extraction of the control information (and the audio data) and/or the conversion of the audio data into the haptic signal may be performed by the electronic device 150 at various layers of the audio stack of the electronic device 150, where the electronic device 150 may be the user device or the playback device. In other words, the extraction and/or the conversion may take place anywhere along the audio stack. For example, if the extraction is performed by a mobile device, the mobile device may intercept the control information at the API level and/or the audio or haptic driver/firmware level.

[0065] Haptic effect tags in the control information consistent with embodiments herein may include one or more parameters. In an aspect, a haptic effect tag in the control information may include a flag indicating whether to provide a haptic effect for the one or more respective audio segments. In one example, for the one or more respective audio segments, a flag with one value (e.g., 1) would indicate to provide a haptic effect and may be referred to as an ON flag, and a flag with another value (e.g., 0) would indicate not to provide a haptic effect, and may be referred to as an OFF flag. In another example, presence of a flag may indicate to provide a haptic effect and absence of a flag may indicate not to provide a haptic effect, for the one or more respective audio segments.

[0066] The flag may be applied to the entirety of the audio data, or to one or more audio segments corresponding to the haptic effect tag. In another example, the flags may be configured to indicate times, within the audio data, during which the haptic conversion should start and stop. The flag location (i.e., time) in the audio data timeline may indicate the timing. For example, if the control circuit 152 receives one or more audio segments and determines that the control information associated with the one or more audio segments includes an ON flag, the control circuit 152 may render one or more haptic effects associated with the one or more audio segments, e.g., through conversion of the audio data of the audio segments into a haptic effect. By contrast, if the control circuit 152 receives one or more audio segments and determines that the control information associated with the one or more audio segments includes an OFF flag, the control circuit 152 may determine not to provide a haptic effect for the one or more audio segments.

[0067] In an aspect, a haptic effect tag in the control information may include at least one parameter of the algorithm for converting the one or more respective audio segments to the haptic signal. For example, the at least one parameter may include an indication of an algorithm to use and/or specific parameters to use in connection with the algorithm, for converting the one or more respective audio segments to a portion of the haptic signal that corresponds to the one or more respective audio segments.

[0068] In an aspect, if a haptic effect tag includes a flag indicating to provide a haptic effect, the haptic effect tag may also include the at least one parameter of the algorithm. However, if a haptic effect tag includes a flag indicating not to provide a haptic effect, the haptic effect tag may not include the at least one parameter of the algorithm. In an aspect, if a haptic effect tag does not include any flag but includes the at least one parameter of the algorithm, the one or more respective audio segments may be converted to the haptic signal. On the other hand, if a haptic effect tag does not include any flag and also does not include any parameter of an algorithm, then no haptic conversion takes place for the one or more respective audio segments.
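
The decision logic of paragraphs [0065]-[0068] can be summarized in a short sketch. The fragment below is illustrative only; the field names (on_flag, algorithm_params) and the helper are assumptions made for the example, not structures defined by this disclosure.

```python
# Illustrative sketch of the flag/parameter decision logic above; field names
# (on_flag, algorithm_params) are hypothetical, not defined by the disclosure.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TagSketch:
    on_flag: Optional[bool] = None      # True = ON flag, False = OFF flag, None = absent
    algorithm_params: Optional[dict] = None

def should_convert(tag: TagSketch) -> bool:
    if tag.on_flag is True:
        return True                     # ON flag: convert the segment(s)
    if tag.on_flag is False:
        return False                    # OFF flag: no haptic effect
    # No flag: convert only if algorithm parameters are present (cf. [0068]).
    return tag.algorithm_params is not None
```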

[0069] In an aspect, a haptic effect tag may include description information of the one or more respective audio segments. The description information may include a high-level description of the one or more respective audio segments, such as voice, gunshot, explosion, background music, etc. In an aspect, the conversion of the one or more respective audio segments to the haptic signal may be based on the description information. For example, a rule may be set such that description information tagging the one or more audio segments with “gunshot” may cause the control circuit 152 to convert the one or more audio segments into haptic effects based on the rule.

[0070] In an aspect, the description information may be one of the parameters of the algorithm for converting the one or more respective audio segments to the haptic signal. In an aspect, the description information may indicate whether to provide a haptic effect for a particular audio segment. For example, the description information “voice” may indicate not to provide a haptic effect, while description information including “gunshot” may indicate to provide a haptic effect. In some configurations, an algorithm or a look-up table may be used to indicate the associations between the algorithms/parameters and the description information. For example, audio-to-haptic algorithms (e.g., audio-to-vibe (A2V) algorithms) or look-up tables may map audio intensity levels to haptic intensity levels based on the description information. In the various configurations, the algorithm or the look-up table may be generated or pre-determined by the application designer or device manufacturer and may be provided to the electronic device 150 prior to receiving the audio data.
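
As an illustration of such a look-up table, the sketch below maps description information to hypothetical conversion parameters; every entry and parameter name is an assumption made for the example, not a mapping prescribed by this disclosure.

```python
# Hypothetical look-up table from description information to conversion
# behavior; None means no haptic effect is to be generated for that label.
DESCRIPTION_TABLE = {
    "voice": None,                                      # do not haptify voice
    "background music": None,
    "gunshot": {"algorithm": "transient", "gain": 1.0},
    "explosion": {"algorithm": "rumble", "gain": 0.8},
}

def conversion_params(description: str):
    """Return conversion parameters for a description label, or None."""
    return DESCRIPTION_TABLE.get(description)
```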

[0071] In an aspect, a haptic effect tag may include haptic device information for the one or more respective audio segments indicating at least one of a type of haptic device to be used to render the haptic effect and/or a particular haptic device to be used to render the haptic effect. For example, in multi-actuator configurations involving multiple haptic actuators, the control information may specify which specific actuator(s) to use for rendering the one or more generated haptic effects, for the respective one or more audio segments. The multi-actuator configurations may be used to provide spatialized haptic effects, as the multiple actuators may be positioned in particular locations to specify locations or directions of sound, for example.

[0072] In an aspect, a haptic effect tag may include a source of the one or more respective audio segments. In an aspect, a source of an audio segment may be a channel or a frequency range to which the audio segment belongs. For example, an audio segment with a voice that belongs to one channel may be converted to one haptic effect and an audio segment with a gunshot sound that belongs to another channel may be converted to another haptic effect. In other aspects, a source of an audio segment may be a sound or type of sound with which the audio segment is associated. For example, an audio segment with a gunshot may include many disparate frequency components, rather than a discrete range of frequencies.

[0073] In an aspect, a haptic tag may include information about a software application used to render the one or more respective audio segments. In an aspect, in a mobile device, the information about a software application may indicate an application (e.g., application(s) 130 of FIG. 1C) or application programming interface (API) used to generate the audio segment (e.g., ringer, sound pool, phone, etc.). For example, when converting the audio data to the haptic signal, for an audio segment, one or more algorithms may determine whether to generate a haptic effect, or how to generate one or more haptic effects, based on the application associated with the audio segment, using a look-up table.

[0074] A haptic tag may include a reference to, be indicative of, or include a haptic track associated with the one or more respective audio segments. The haptic track may be an authored haptic track, or a haptic track that was converted from sensor data or audio/video content and is embedded in the audio signal. In an aspect, if a haptic tag for an audio segment indicates or includes a haptic track, conversion of the audio segment into a haptic signal may not be necessary, and the mechanism extracting the control information may direct the haptic track to be provided as a part of the haptic signal. In some examples, the haptic track may include a reference to one or more stored haptic effects, a reference to one or more intended haptic output accelerations, and/or a haptic drive signal.

[0075] A haptic tag may include user information associated with the one or more audio segments. For example, the haptic effect tag may identify the user(s) associated with the one or more audio segments. In one example, each user may be associated with a particular channel, such that sound or voice from one user may be provided through the particular channel.

[0076] A haptic effect tag may include haptic effect time information indicating time for generating the haptic effect. For example, the haptic effect time information may identify one or more haptic rendering time periods for each haptic effect and/or a start time and a duration for each haptic effect.
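
Collecting the fields discussed in paragraphs [0065]-[0076], one possible in-memory representation of a haptic effect tag is sketched below; all field names and types are illustrative assumptions, not a format defined by this disclosure.

```python
# A hedged sketch of a haptic effect tag; the fields mirror [0065]-[0076] but
# the names and types are assumptions made for the example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HapticEffectTag:
    on_flag: Optional[bool] = None        # whether to generate a haptic effect
    algorithm_params: Optional[dict] = None  # conversion algorithm parameters
    description: Optional[str] = None     # e.g., "voice", "gunshot"
    device_info: Optional[str] = None     # actuator type or specific actuator
    source: Optional[str] = None          # channel, frequency range, or sound type
    application: Optional[str] = None     # software application / API of origin
    haptic_track: Optional[bytes] = None  # pre-authored track, if provided
    user: Optional[str] = None            # associated user(s)
    start_time_s: Optional[float] = None  # haptic effect time information
    duration_s: Optional[float] = None
```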

[0077] In some configurations, the control information included with the audio signal enables a connection of haptic objects to virtual reality objects in a virtual reality application. For example, the haptic intensity or magnitude may be varied depending on the user’s distance from a virtual reality object that generates a sound (e.g., a grenade exploding). That is, for a closer distance between the user and the virtual reality object generating a sound, the haptic intensity or magnitude may be greater.
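
A minimal sketch of this distance-dependent scaling follows; the inverse-distance law is an assumption chosen purely for illustration, since the disclosure only requires that intensity grow as the distance shrinks.

```python
# Hypothetical distance-based scaling for the virtual reality example above.
def scaled_intensity(base_intensity: float, distance_m: float) -> float:
    # Closer virtual objects yield stronger haptics; clamp to [0, 1].
    return max(0.0, min(1.0, base_intensity / (1.0 + distance_m)))
```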

[0078] In an aspect, the electronic device 150 may be a user device with the control circuit 152 capable of rendering the audio based on the audio data via the audio output system associated with the user device and generating the one or more haptic effects by the haptic output device based on the haptic signal. The haptic output device may be associated with the user device. As discussed above, the audio output system and/or the haptic output device may exist within the electronic device 150 as the audio output system 162 and/or the haptic output device 168, respectively, or may exist separately from the electronic device 150.

[0079] In an aspect where the one or more audio segments include multiple audio segments, the control circuit 152 may associate one or more first audio segments of the multiple audio segments with a first audio output component (e.g., 164) of the audio output device and one or more second audio segments of the multiple audio segments with a second audio output component (e.g., 166) of the audio output device based on audio output channel information of the multiple audio segments. The one or more first audio segments may be associated with a first output channel and the one or more second audio segments may be associated with a second output channel. In such an aspect, the control circuit 152 may provide the audio data by providing the one or more first audio segments to the first audio output component and providing the one or more second audio segments to the second audio output component. Further, in such an aspect, the control information includes first control information associated with the first output channel and second control information associated with the second output channel, and the control circuit 152 may convert the audio data into the haptic signal by converting the one or more first audio segments into a first haptic signal of the haptic signal based on the first control information and converting the one or more second audio segments into a second haptic signal of the haptic signal based on the second control information. In addition, the control circuit 152 may provide the haptic signal by providing the first haptic signal to render a first portion of the one or more haptic effects via a first haptic output component of the haptic output device and providing the second haptic signal to render a second portion of the one or more haptic effects via a second haptic output component of the haptic output device.

[0080] For example, in a stereo audio case having two audio output channels for two respective speakers, each audio output channel used for its own audio may carry its own control information. Thus, an audio output channel for a first speaker (e.g., left speaker) used to render first audio segments may carry first control information for the first audio segments, and a channel for a second speaker (e.g., right speaker) used to render second audio segments may carry second control information. In a use example, if there is a gunshot on the left side, the left speaker may play the gunshot and a haptic output component on the left side may vibrate, based on the first audio segment and the first control information. If there is a gunshot in front, or if the sound is not directional, both the left and right speakers may play the sound and both haptic output components may vibrate on both sides. Although the above explanation is associated with a stereo audio case, similar concepts may be applied to scenarios with more than two audio output channels and more than two audio output components, such as a scenario involving a Dolby 5.1 sound system.
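
A sketch of the stereo case might look as follows, with convert() standing in for any audio-to-haptic algorithm; the dictionary layout and names are assumptions for the example.

```python
# Illustrative per-channel routing for the stereo example; convert() is a
# placeholder for an audio-to-haptic algorithm, and the dict layout is assumed.
def render_stereo(channels, convert):
    """channels: {'left'|'right': (audio_segments, control_info)}."""
    haptic_signals = {}
    for side, (segments, control_info) in channels.items():
        # Each channel carries its own control information, so each side's
        # actuator is driven from its own segments and its own tags.
        haptic_signals[side] = [convert(seg, control_info) for seg in segments]
    return haptic_signals
```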

[0081] In an aspect, the electronic device 150 may be a remote device remote from the user device. One example of a remote device is a device in the cloud. In such an aspect, the control circuit 152 may extract the audio data and the control information from the audio signal and convert the audio data into the haptic signal according to the control information, and then may provide the audio data and the haptic signal to the user device to render the audio and to generate the one or more haptic effects by the user device.

[0082] In an aspect, the control information may be generated by an external device other than the electronic device 150, such as the electronic device 100. In such an aspect, obtaining the audio signal by the control circuit 152 may include receiving the audio signal from the external device, e.g., via the communication interface 158, where the control information is embedded in the audio data by the external device to generate the audio signal.

[0083] Additionally or alternatively, in an aspect, the control information may also be generated and included with the audio data in the audio signal by the electronic device 150. In such an aspect, obtaining the audio signal by the control circuit 152 may include generating the control information including the one or more haptic effect tags, and combining the control information with the audio data to generate the audio signal.

[0084] In an aspect, the control circuit 152 may determine whether the control information for the audio data is present. If the control information is not present, the control circuit 152 may generate the control information and include the control information in the audio signal with the audio data. In this aspect, for example, the control information generated by the control circuit 152 may be generated based on a type of the audio data, the API, and the application settings. If the control information already exists, the control circuit 152 may not generate the control information and may not include the control information with the audio data in the audio signal.

[0085] In an aspect, the control circuit 152 may update the audio signal by adding additional information to the control information included with the audio signal to generate an updated audio signal, where the control information provided to convert the audio data into the haptic signal is extracted from the updated audio signal. For example, the control circuit 152 may receive additional information to add to the audio signal as a part of the control information and may update the audio signal by adding the additional information. The updated control information with the additional information is then used to convert the audio data into a haptic signal. For example, the additional information may be information based on one or more of: policies of an original equipment manufacturer (OEM) of the electronic device 150, user preferences, which software application is playing the audio data, a type of the audio data, the API, and the application settings.

[0086] FIG. ID illustrates a block diagram showing an illustrative process for creating the audio signal by embedding the control information with the audio data in the audio signal and providing the audio data and the haptic signal for playback based on the audio signal, according to an embodiment hereof.

[0087] During a process 170 to generate the audio signal, control information 172 may be embedded with the audio data 174 to generate the audio signal 176. The control information 172 may be encoded with the audio data 174 as a collection of pulses (e.g., sine wave, square wave, etc.) of different magnitudes (e.g., 0 to 1) and frequencies so as to encode different parameters and messages. Here, the frequencies used may be in a range inaudible to humans (e.g., higher than 20 kHz, such as 22 kHz) or may be in a very low range (e.g., less than 20 Hz). In another example embodiment, the control information 172 may be encoded as a pattern of “0”-valued samples spaced with a specific number of non-zero audio samples (i.e., zeroing some samples in the audio signal). The number of null/“0” samples and the spaces between them may be used to encode different parameters/messages. In yet another example embodiment, the control information 172 may be embedded along the least significant bits of the audio data samples. In each of these example embodiments, an encoded header may be used to describe the control information 172.
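
Of the encoding schemes above, the least-significant-bit variant is the easiest to sketch. The fragment below is a minimal illustration assuming 16-bit PCM samples and a caller-prepared bit sequence (including any header); it is not the encoder specified by this disclosure.

```python
# Minimal sketch of LSB embedding into 16-bit PCM audio samples; assumes the
# control bits (including any encoded header) are prepared by the caller.
import numpy as np

def embed_lsb(audio: np.ndarray, bits: list) -> np.ndarray:
    """Write one control bit into the LSB of each of the first len(bits) samples."""
    out = audio.copy()
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | (bit & 1)  # clear the LSB, then set it
    return out

def extract_lsb(audio: np.ndarray, n_bits: int) -> list:
    """Read back n_bits control bits from the sample LSBs."""
    return [int(s) & 1 for s in audio[:n_bits]]
```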

[0088] In another embodiment, control information 172 may be provided within audible frequencies of audio data 174. In this configuration, during a process 180 for providing the audio data and the haptic signal, the audible control information 172 may be filtered out during the process 180 such that the audio data 174 without the control information 172 is provided for audio playback 192.

[0089] During the process 180, the audio data 174 and the control information 172 are extracted from the audio signal 176, at 182, for example, by the extraction component 154. The audio data 174 extracted from the audio signal 176 is provided for the audio playback 192. The control information 172 extracted from the audio signal is used to convert the audio data 174 into a haptic signal, which is provided for haptic playback 194. Subsequently, during a process 190 for audio and haptic playback, audio is played based on the audio data 174 via the audio playback 192 and a haptic effect is generated based on the haptic signal via the haptic playback 194.
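
The flow of processes 180 and 190 can be summarized in a few lines; extract_control(), audio_to_haptic(), play_audio(), and play_haptics() are placeholders for the mechanisms described in this section, not named components of this disclosure.

```python
# Hedged end-to-end sketch of processes 180 and 190; all callables are
# placeholders for the extraction, conversion, and playback steps above.
def playback(audio_signal):
    audio_data, control_info = extract_control(audio_signal)  # step 182
    play_audio(audio_data)                                    # audio playback 192
    haptic_signal = audio_to_haptic(audio_data, control_info)
    play_haptics(haptic_signal)                               # haptic playback 194
```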

[0090] The process 170 may be performed by the electronic device 100, which may be a device used by a content provider or a content designer to provide the audio data. Additionally or alternatively, the process 170 may be performed by the electronic device 150, which may be a user device or an intermediate device capable of providing the audio data and the haptic signal for audio and haptic playback based on the audio signal.

[0091] The process 180 may be performed by any device interacting with an audio output system and a haptic output device. The process 180 may be performed by the electronic device 150, which may be a user device or an intermediate device capable of providing the audio data and the haptic signal. Additionally or alternatively, the process 180 may be performed by the electronic device 100. For example, the process 180 may be performed at a server/cloud service device that receives the audio signal 176, which then sends the audio data and the haptic signal to a playback device (e.g., mobile device, gaming console, AR/VR engine, etc.). In another example embodiment, the process 180 may be performed by a mobile device, which then sends the audio data and the haptic signal to a playback device such as a headset or a speaker.

[0092] The process 190 may be performed by the electronic device 150 and/or the electronic device 100. Additionally or alternatively, the process 190 may be performed by a playback device separate from the electronic device 150 and the electronic device 100.

[0093] FIGS. 2A and 2B illustrate diagrams showing various interactions between electronic devices, according to various embodiments hereof. FIG. 2A illustrates a diagram showing an interaction between an external device 200 and a user device 250, according to an embodiment hereof. The external device 200 may be similar to the electronic device 100 of FIG. 1A, and thus structures of the control circuit 202, the communication interface 208, and the memory 210 of the external device 200 may be the same as or similar to those of the control circuit 102, the communication interface 108, and the memory 110 of the electronic device 100, respectively. Further, the user device 250 may be similar to the electronic device 150 of FIG. 1B, and thus structures of a control circuit 252, a communication interface 258, the memory 260, an audio output system 262, and a haptic output device 268 of the user device 250 may be the same as or similar to those of the control circuit 152, the communication interface 158, the memory 160, the audio output system 162, and the haptic output device 168 of the electronic device 150, respectively. Therefore, detailed descriptions of the structures of the components of the external device 200 and the user device 250 are omitted for brevity.

[0094] In FIG. 2A, the control circuit 202 of the external device 200 generates control information and creates the audio signal by including the control information with the audio data, where the audio signal may be stored in the memory 210. Then, the control circuit 202 provides the audio signal to the user device 250 via the communication interface 208. For example, the external device 200 may be a device used by a content provider or a content designer to provide the audio data to another device such as the user device 250.

[0095] The control circuit 252 of the user device 250 receives the audio signal including the audio data and the control information, via the communication interface 258, and extracts the audio data and the control information from the audio signal. The control circuit 252 provides the audio data to render audio via the audio output system 262. The control circuit 252 also converts the audio data into a haptic signal based on the control information and provides the haptic signal to generate haptic effects via the haptic output device 268.

[0096] FIG. 2B illustrates a diagram showing an interaction among the external device 200, an intermediate device 230, and a playback device 280. The external device 200 of FIG. 2B may be the same as the external device 200 of FIG. 2A. In FIG. 2B, the intermediate device 230 may be similar to the electronic device 150 of FIG. 1B, and thus structures of a control circuit 232, a communication interface 238, and the memory 240 of the intermediate device 230 may be the same as or similar to those of the control circuit 152, the communication interface 158, and the memory 160 of the electronic device 150, respectively. Therefore, detailed descriptions of the structures of the components of the external device 200 and the intermediate device 230 are omitted for brevity.

[0097] The playback device 280 may include a control circuit 282, a communication interface 284, a memory 286, an audio output system 288, and a haptic output device 290. The playback device 280 may be configured to render audio via the audio output system 288 and to generate a haptic effect via the haptic output device 290.

[0098] In FIG. 2B, the control circuit 202 of the external device 200 generates the audio signal by generating control information for the audio data and including the control information in the audio signal with the audio data. The audio signal may be stored in the memory 210. Then, the control circuit 202 provides the audio signal to the intermediate device 230 via the communication interface 208.

[0099] The control circuit 232 of the intermediate device 230 receives the audio signal including the audio data and the control information, via the communication interface 238, and extracts the audio data and the control information from the audio signal. The control circuit 232 provides the audio data to the playback device 280 via the communication interface 238. The control circuit 232 also converts the audio data into a haptic signal based on the control information and provides the haptic signal to the playback device 280 via the communication interface 238. Thus, in FIG. 2B, although the intermediate device 230 extracts the audio data and converts the audio data into the haptic signal, the intermediate device 230 does not render audio or generate a haptic effect, but instead relies on another device to render audio and generate a haptic effect. The intermediate device 230 may be a device in the cloud or may be a media/game system connected to the playback device 280.

[00100] The control circuit 282 of the playback device 280 may receive the audio data and the haptic signal for the audio data from the intermediate device 230, via the communication interface 284. Subsequently, the control circuit 282 may provide the audio data to the audio output system 288 to render audio and may provide the haptic signal to the haptic output device 290 to generate haptic effects.

[00101] FIG. 4 is a diagram of a haptically enabled system/device 400, according to an example embodiment hereof.

[00102] The system 400 includes a touch sensitive surface 11, such as a touchscreen, or other type of user interface mounted within a housing 15, and may include mechanical keys/buttons 13 and a speaker 28. Internal to the system 400 is a haptic feedback system that generates haptic effects on the system 400 and includes a processor 12. The processor 12 may be a part of the control circuit 102 or the control circuit 152. Coupled to the processor 12 are a memory 20 and a haptic drive circuit 16, which is coupled to a haptic actuator 18 or other haptic output device.

[00103] The processor 12 may be configured to include control information including one or more haptic effect tags with audio data to generate an audio signal. In addition, the processor 12 may be configured to determine which haptic effects are rendered and the order in which the haptic effects are rendered based on one or more haptic effect tags that are included with the audio data. The haptic effect tag may identify, for example, the user(s) associated with the audio data and/or the type of audio data. In addition, the haptic effect tags may identify one or more haptic rendering time periods for each haptic effect as well as the start time and duration for each haptic effect. Additional high-level parameters, such as the magnitude, frequency, and type of haptic effect, may also be specified. A haptic effect may be considered “dynamic” if it includes some variation of these parameters when the haptic effect is rendered or a variation of these parameters based on a user’s interaction. Examples of such dynamic effects include ramp-up, ramp-down, spatial, and other haptic effects. The haptic feedback system, in one embodiment, generates vibrations 30, 31 or other types of haptic effects on the system 400.

[00104] The processor 12 outputs the control signals to the haptic drive circuit 16, which includes electronic components and circuitry used to supply the haptic actuator 18 with the required electrical current and voltage (i.e., “motor signals”) to render the desired haptic effects. The haptic actuator 18 may be the haptic output device 118 or the haptic output device 168. The system 400 may include more than one haptic actuator 18 or other haptic output devices, and each actuator may include a separate haptic drive circuit 16, all coupled to a common processor 12.

[00105] The haptic drive circuit 16 is configured to drive the haptic actuator 18. For example, the haptic drive circuit 16 may attenuate the haptic drive signal at and around the resonance frequency (e.g., +/- 20 Hz, 30 Hz, 40 Hz, etc.) of the haptic actuator 18. In certain embodiments, the haptic drive circuit 16 may comprise a variety of signal processing stages, each stage defining a subset of the signal processing applied to modify the haptic drive signal.
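
As one concrete way to attenuate the drive signal around the actuator's resonance, a standard IIR notch filter could be used; the sketch below assumes a 170 Hz resonance, an 8 kHz sample rate, and a quality factor of 4, none of which come from this disclosure.

```python
# Hypothetical resonance attenuation with a standard IIR notch filter; the
# resonance frequency, sample rate, and Q value are assumed for illustration.
from scipy.signal import iirnotch, lfilter

def attenuate_resonance(drive, fs=8000.0, f0=170.0, q=4.0):
    b, a = iirnotch(f0, q, fs=fs)  # notch centered on the actuator resonance
    return lfilter(b, a, drive)
```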

[00106] The processor 12 may be any type of general purpose processor, or could be a processor specifically designed to provide haptic effects, such as an application-specific integrated circuit (“ASIC”). The processor 12 may be the same processor that operates the entire system 400, or may be a separate processor.

[00107] The memory 20 may include a variety of computer-readable media that may be accessed by the processor 12. The memory 20 may be the memory 110 or the memory 160. In the various embodiments, the memory 20 and other memory devices described herein may include volatile and nonvolatile media and removable and non-removable media. For example, the memory 20 may include any combination of random access memory (“RAM”), dynamic RAM (“DRAM”), static RAM (“SRAM”), read only memory (“ROM”), flash memory, cache memory, and/or any other type of non-transitory computer-readable medium. The memory 20 stores instructions executed by the processor 12. Among the instructions, the memory 20 includes a media haptic simulation module 22, which includes instructions that, when executed by the processor 12, generate the haptic effects using the haptic actuator 18 in combination with the touch sensitive surface 11 and/or the speaker 28, and by encoding haptic effects as discussed below. The memory 20 may also be located internal to the processor 12, or in any combination of internal and external memory.

[00108] The haptic actuator 18 may be any type of actuator or haptic output device that may generate a haptic effect. In general, an actuator is an example of a haptic output device, where a haptic output device is a device configured to output haptic effects, such as vibrotactile haptic effects, electrostatic friction haptic effects, temperature variation, and/or deformation haptic effects, in response to a drive signal. Although the term actuator may be used throughout the detailed description, the embodiments of the invention may be readily applied to a variety of haptic output devices. The haptic actuator 18 may be, for example, an electric motor, an electro-magnetic actuator, a voice coil, a shape memory alloy, an electro-active polymer, a solenoid, an eccentric rotating mass motor (“ERM”), a harmonic ERM motor (“HERM”), a linear resonance actuator (“LRA”), a solenoid resonance actuator (“SRA”), a piezoelectric actuator, a macro fiber composite (“MFC”) actuator, a high bandwidth actuator, an electroactive polymer (“EAP”) actuator, an electrostatic friction display, an ultrasonic vibration generator, or the like. In some instances, the actuator itself may include a haptic drive circuit.

[00109] In addition to, or in place of, the haptic actuator 18, the system 400 may include other types of haptic output devices (not shown) that may be non-mechanical or non-vibratory devices such as devices that use electrostatic friction (“ESF”), ultrasonic surface friction (“USF”), devices that induce acoustic radiation pressure with an ultrasonic haptic transducer, devices that use a haptic substrate and a flexible or deformable surface or shape changing devices and that may be attached to a user’s body, devices that provide thermal haptic effects (e.g., thermal effect generators), devices that provide projected haptic output such as a puff of air using an air jet, etc.

[00110] In general, an actuator may be characterized as a standard definition (“SD”) actuator that generates vibratory haptic effects at a single frequency. Examples of an SD actuator include the ERM and the LRA. By contrast to an SD actuator, a high definition (“HD”) actuator or high fidelity actuator, such as a piezoelectric actuator or an EAP actuator, is capable of generating high bandwidth/definition haptic effects at multiple frequencies. HD actuators are characterized by their ability to produce wide bandwidth tactile effects with variable amplitude and with a fast response to transient drive signals.

[00111] The system 400 may be any type of electronic device, such as a cellular telephone, personal digital assistant (“PDA”), smartphone, computer tablet, gaming console, remote control, or any other type of device that includes a haptic effect system that includes one or more actuators. The system 400 may include wearable devices such as wrist bands, headbands, eyeglasses, rings, leg bands, arrays integrated into clothing, etc., or any other type of device that a user may wear on a body or may be held by a user and that is haptically enabled, including furniture or a vehicle steering wheel. Further, some of the elements or functionality of the system 400 may be remotely located or may be implemented by another device that is in communication with the remaining elements of the system 400.

[00112] FIGS. 5A-5F illustrate various examples for generating haptic effect tags for the control information, according to various embodiments hereof. FIG. 5A illustrates an example where audio data is divided into multiple audio segments having the same length in time and the audio data includes sounds from three different sources that do not overlap with each other in time. The three different sources may be three different sound origins from which sounds originate. In an aspect, the sound origins may represent where the sounds originate within the audio data or how the sounds are generated. For example, a source for a voice may be a microphone capturing a user voice, a source for a gunshot sound may be a trigger button on a user device, and a source for an explosion sound may be a software application automatically playing the explosion sound. In another aspect, the sound origins may represent devices or components from which the sounds originate. In further aspects, the sources or sound origins may be representative of the origin of the sound within the context of the audio data, for example, the sounds associated with footsteps within the audio data, the sounds associated with a gunshot within the audio data, the sounds associated with a voice within the audio data, and/or the sounds associated with music within the audio data. Thus, in embodiments, a source or sound origin does not require a physical origin, but may refer to the origin or source of the sound within the context of the audio data. As illustrated in FIG. 5A, for Time 1 and Time 2, the audio data includes audio segments with a first voice sound from a first source. For Time 3, the audio data includes an audio segment with an explosion sound from a second source. For Time 4, Time 5, and Time 6, the audio data includes audio segments with a second voice from the first source. For Time 7, the audio data includes an audio segment with a gunshot sound from a third source.

[00113] FIG. 5B illustrates a table showing the audio segments of the audio data from FIG. 5 A being classified into three groups. Group 1, Group 2, and Group 3 respectively correspond with the first source, the second source, and the third source. As shown in FIG. 5B, the audio segments including the first voice and the second voice from the first source are classified as Group 1. The audio segment including the explosion sound from the second source is classified as Group 2. The audio segment including the gunshot sound from the third source is classified as Group 3.

[00114] FIG. 5C illustrates an example where audio data is divided into multiple audio segments having the same length in time and the audio data includes sounds from three different sources that partially overlap with each other in time. As illustrated in FIG. 5C, for Time 1 and Time 2, the audio data includes audio segments with a first voice sound from a first source. For Time 2, Time 3, Time 4, and Time 5, the audio data includes audio segments with an explosion sound from a second source. For Time 4, Time 5, and Time 6, the audio data includes audio segments with a second voice from the first source. For Time 4, Time 5, Time 6, and Time 7, the audio data includes audio segments with a gunshot sound from a third source. Thus, the explosion sound partially overlaps with the first voice and the second voice, and the gunshot sound partially overlaps with the second voice.

[00115] FIG. 5D illustrates a table showing the audio segments of the audio data from FIG. 5C being classified into three groups. Group 1, Group 2, and Group 3 respectively correspond with the first source, the second source, and the third source. As shown in FIG. 5D, the audio segments including the first voice and the second voice from the first source are classified as Group 1. The audio segments including the explosion sound from the second source are classified as Group 2. The audio segments including the gunshot sound from the third source are classified as Group 3. For Time 2, because the explosion sound from the second source partially overlaps with the first voice from the first source, two audio segments from the two sources are present in Time 2. For Time 4 and Time 5, because the explosion sound from the second source partially overlaps with the second voice from the first source and the gunshot sound from the third source, audio segments from the three sources are present in each of Time 4 and Time 5. For Time 6, because the gunshot sound from the third source partially overlaps with the second voice from the first source, two audio segments from the two sources are present in Time 6.

[00116] FIG. 5E illustrates an example where audio data is divided into multiple audio segments having different lengths in time and the audio data includes sounds from three different sources that partially overlap with each other in time. In FIG. 5E, the audio data is divided into the multiple audio segments based on the presence of a particular sound at a particular source. Hence, there is one audio segment for each time window. As shown in FIG. 5E, for Time 1, the audio data includes an audio segment with a first voice from the first source. For Time 2, the audio data includes an audio segment with an explosion sound from the second source. For Time 3, the audio data includes an audio segment with a second voice from the first source. For Time 4, the audio data includes an audio segment with a gunshot sound from the third source. As shown in FIG. 5E, Time 1 partially overlaps with Time 2, Time 2 partially overlaps with Time 3 and Time 4, and Time 3 partially overlaps with Time 4.

[00117] FIG. 5F illustrates a table showing the audio segments of the audio data from FIG. 5E being classified into three groups. Group 1, Group 2, and Group 3 respectively correspond with the first source, the second source, and the third source. As shown in FIG. 5F, the audio segments including the first voice and the second voice from the first source are classified as Group 1. The audio segment including the explosion sound from the second source is classified as Group 2. The audio segment including the gunshot sound from the third source is classified as Group 3.

[00118] In the above examples, each of Group 1, Group 2, and Group 3 may be associated with a respective haptic effect tag included in the control information. That is, when the control information is generated, a first haptic effect tag may be associated with the audio segments in Group 1, a second haptic effect tag may be associated with the audio segments in Group 2, and a third haptic effect tag may be associated with the audio segments in Group 3. Each haptic effect tag may indicate a time and/or a source corresponding to a respective audio segment, which may be used to convert the respective audio segment into a haptic signal.
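
The grouping and tagging illustrated in FIGS. 5A-5F can be sketched as follows; the segment tuple layout and the tag contents are assumptions made for the example.

```python
# Illustrative grouping of audio segments by source, one haptic effect tag per
# group, as in FIGS. 5A-5F; the segment tuple layout is an assumption.
from collections import defaultdict

def group_and_tag(segments):
    """segments: iterable of (time_index, source, description) tuples."""
    groups = defaultdict(list)
    for time_index, source, description in segments:
        groups[source].append((time_index, description))
    # One tag per group, recording the times and the source per [00118].
    return {source: {"source": source, "times": [t for t, _ in items]}
            for source, items in groups.items()}
```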

[00119] FIG. 6 illustrates an architecture 600 used to provide audio and haptic playback, according to an embodiment hereof. The architecture may include a content creator 610, an operating system 620, an integrated circuit (IC) provider 630, and an audio/haptic IC 640. The operating system 620, the IC provider 630, and the audio/haptic IC 640 may reside within the same device. The content creator 610 may reside within the same device as the operating system 620, the IC provider 630, and the audio/haptic IC 640, or may reside in a separate device. The conversion of the audio data into a haptic signal may take place at one or more of the operating system 620, the IC provider 630, and the audio/haptic IC 640, as discussed below.

[00120] The content creator 610 may provide audio data. The content creator 610 may provide the audio data in an audio signal having control information included with the audio data. The operating system 620 may be capable of executing a software application that requests playback of the audio data. The operating system 620 may determine whether control information for the audio data is present. In an aspect, if the control information is not present, the operating system 620 may generate and include control information with the audio data based on an audio stream type, an API, and/or application settings. Additional control information may be included with the audio data in the audio signal by the operating system 620 based on policies of an original equipment manufacturer (OEM) of the device, based on user preferences, and/or based on which software application is playing the audio data. For example, the user preferences included as the additional control information may indicate which software applications should provide haptics for audio playback and which software applications should not provide haptics. In an aspect, if the control information is present, the operating system 620 may convert the audio data to a haptic signal based on the control information, and otherwise may convert the audio data to a haptic signal based on a default setting or based on an audio stream type, an API, and/or application settings.

[00121] The IC provider 630 is a software layer including drivers for providing output to the audio/haptic IC 640. The IC provider 630 includes an audio driver for driving an audio output system and a haptic driver for driving a haptic output device. In an aspect, if the control information is present, the IC provider 630 may convert the audio data to a haptic signal based on the control information, and otherwise may convert the audio data to a haptic signal based on a default setting.

[00122] The audio/haptic IC 640 processes the audio data and a haptic signal and then generates audio output and haptic effects. The audio/haptic IC 640 may be a hardware component and may include one or more software/hardware modules. In an aspect, if the control information is present, the audio/haptic IC 640 may convert the audio data to a haptic signal based on the control information, or otherwise may convert the audio data to a haptic signal based on a default setting.

[00123] FIGS. 7A-7D are various examples of the architecture 600 of FIG. 6 for a device running an operating system, according to various embodiments hereof. As illustrated in FIGS. 7A-7D, each of architectures 700A, 700B, 700C, and 700D includes a content creator 710, an operating system 720, an IC provider 730, and an audio/haptic IC 740, which are examples of the content creator 610, the operating system 620, the IC provider 630, and the audio/haptic IC 640 of FIG. 6. The example architectures 700A, 700B, 700C, and 700D may be readily utilized in Android™ devices, each running an Android™ operating system. However, the embodiments of the invention are not limited to any particular device type and may be readily applied to a variety of mobile and other electronic devices. As shown in FIGS. 7A-7D, the content creator 710 provides the audio data and may embed the control information in the audio data. The operating system 720 includes an application layer, an Audio API layer, a Native AudioTrack layer, and an AudioFlinger Service layer. In some configurations, the Native AudioTrack layer and the AudioFlinger Service layer may be considered as a part of the Audio API layer. The IC provider 730 includes an Audio Hardware Abstraction Layer (HAL) and an Audio Driver.

[00124] In FIG. 7A, the audio-to-haptic conversion may utilize the control information at the audio/haptic IC 740. By including the control information with the audio data, as a part of the audio signal, the control information makes its way down the audio stack of architecture 700A, as the audio signal, to the audio/haptic IC 740 implementing a real-time audio-to-haptic conversion algorithm. The control information may be extracted at the audio/haptic IC 740, and then the audio data may be converted into a haptic signal according to the control information to be sent to the haptic output device. Alternatively, the control information may be sent directly to a haptic output device.

[00125] In particular, as shown in FIG. 7A, at the Native AudioTrack layer, the device determines whether the control information (Cl) is missing for the audio data from the content creator 710. If the control information is missing, the device may optionally generate and include control information with the audio data based on at least one of a type of the audio data, the API, and the application setting. The inclusion of the control information with the audio data in an audio signal at the Native AudioTrack layer may be an optional feature for the OEM. Further, in FIG. 7A, at the audio/haptic IC 740, if the control information for the audio data is present, the device may convert the audio data to a haptic signal based on the control information. If the control information for the audio data is not present, at the audio/haptic IC 740, the device may convert the audio data to a haptic signal based on a default setting.

[00126] In FIG. 7B, the audio-to-haptic conversion may utilize the control information at the AudioFlinger Service layer. In particular, as shown in FIG. 7B, at the Native AudioTrack layer, the device determines whether the control information is present for the audio data from the content creator 710. If the control information is not present, the device may optionally generate and include control information with the audio data in an audio signal based on at least one of a type of the audio data, the API, and the application setting. The inclusion of the control information with the audio data at the Native AudioTrack layer may be an optional feature for the OEM. Further, in FIG. 7B, at the AudioFlinger Service layer, if the control information for the audio data is present, the device may convert the audio data to a haptic signal based on the control information. If the control information for the audio data is not present, at the AudioFlinger Service layer, the device may convert the audio data to a haptic signal based on a default setting.

[00127] In FIG. 7C, the audio-to-haptic conversion may utilize the control information at the Native AudioTrack layer. In particular, as shown in FIG. 7C, at the Native AudioTrack layer, the device determines whether the control information is present for the audio data from the content creator 710. If the control information is present, the device may convert the audio data to a haptic signal based on the control information. If the control information for the audio data is not present, at the Native AudioTrack layer, the device may convert the audio data to a haptic signal based on at least one of a type of the audio data, the API, and the application setting.

[00128] In FIG. 7D, the audio-to-haptic conversion may utilize the control information at the Audio HAL. In particular, as shown in FIG. 7D, at the Native AudioTrack layer, the device determines whether the control information is missing for the audio data from the content creator 710. If the control information is missing, the device may optionally generate and include control information with the audio data in an audio signal based on at least one of a type of the audio data, the API, and the application setting. The inclusion of the control information in the audio data at the Native AudioTrack layer may be an optional feature for the OEM. Further, in FIG. 7D, at the Audio HAL, if the control information for the audio data is present, the device may convert the audio data to a haptic signal based on the control information. If the control information for the audio data is not present, at the Audio HAL, the device may convert the audio data to a haptic signal based on a default setting.

[00129] FIG. 8 depicts a flow diagram for a method 800 of providing audio data with haptic control information. The method 800 may be performed by the control circuit 102 of the electronic device 100. The method 800 may also be performed by the control circuit 152 of the electronic device 150. The method 800 may further be performed by the processor 12 associated with the system 400.

[00130] In step 801, the control circuit 102 segments the audio data into one or more audio segments, as described herein.

[00131] In step 803, the control circuit 102 generates control information for the one or more audio segments. The control information includes one or more haptic effect tags associated with the one or more audio segments. The one or more haptic effect tags indicate at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect.

[00132] In step 805, the control circuit 102 provides an audio signal including the audio data and the control information to render the audio data and the one or more haptic effects based on the one or more audio segments.
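
A compressed sketch of method 800 follows; segment(), make_tags(), and combine() are placeholders for steps 801, 803, and 805, not functions defined by this disclosure.

```python
# Hedged outline of method 800; each callable is a placeholder for the
# corresponding step rather than a defined implementation.
def method_800(audio_data):
    segments = segment(audio_data)             # step 801: segment the audio
    control_info = make_tags(segments)         # step 803: haptic effect tags
    return combine(audio_data, control_info)   # step 805: the audio signal
```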

[00133] FIG. 9 depicts a flow diagram for a method 900 of providing control information for haptic effects.

[00134] In step 901, the control circuit 152 obtains an audio signal including audio data and control information. The audio signal includes one or more audio segments and the control information includes one or more haptic effect tags. The one or more haptic effect tags indicate at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect.

[00135] In step 903, the control circuit 152 provides the audio data to render audio via an audio output system based on the one or more audio segments of the audio data.

[00136] In step 905, the control circuit 152 converts the audio data into a haptic signal to generate one or more haptic effects based on the control information. The control information may include one or more haptic effect tags.

[00137] In step 907, the control circuit 152 provides the haptic signal to generate the one or more haptic effects via a haptic device.
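
Method 900 admits a similarly compressed outline; again, every callable below is a placeholder standing in for the device-specific mechanisms discussed earlier in this section.

```python
# Hedged outline of method 900 on the playback side; all callables are
# placeholders for the extraction, rendering, and conversion steps above.
def method_900(audio_signal):
    audio_data, control_info = obtain(audio_signal)     # step 901
    render_audio(audio_data)                            # step 903
    haptic_signal = convert(audio_data, control_info)   # step 905
    render_haptics(haptic_signal)                       # step 907
```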

[00138] Additional Discussion of Various Embodiments

[00139] Embodiment 1 of the present disclosure relates to a method of providing information for haptic effects. The method comprises obtaining, by at least one processor, an audio signal including audio data and control information, the audio signal including one or more audio segments and the control information including one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The method further comprises providing the audio data to render audio via an audio output system based on the one or more audio segments. The method further comprises converting the audio data into a haptic signal to generate one or more haptic effects based on the one or more haptic effect tags and providing the haptic signal to generate the one or more haptic effects via a haptic device.

[00140] Embodiment 2 includes the method of embodiment 1, where the one or more audio segments in the audio data includes a plurality of audio segments and the one or more haptic tags in the control information includes a plurality of haptic effect tags associated with the plurality of audio segments.

[00141] Embodiment 3 includes the method of embodiment 2, where each audio segment of the plurality of audio segments is associated with a respective audio group of a plurality of audio groups based on one or more characteristics of each audio segment, and where the plurality of haptic effect tags are respectively associated with the plurality of audio groups.

[00142] Embodiment 4 includes the method of any one of embodiments 1-3, where the audio signal is encoded audio data including the control information embedded in the audio data.

[00143] Embodiment 5 includes the method of any one of embodiments 1-4, where obtaining the audio signal comprises: generating the control information including the one or more haptic effect tags, and combining the control information with the audio data to generate the audio signal.

[00144] Embodiment 6 includes the method of embodiment 5, where obtaining the audio signal further comprises: determining whether the control information for the audio data is present, where generating the control information and embedding the control information in the audio data are performed when the control information is not present.

[00145] Embodiment 7 includes the method of any one of embodiments 1-4, where obtaining the audio signal comprises: receiving the audio signal from an external device, where the control information is included with the audio data in the audio signal by the external device.

[00146] Embodiment 8 includes the method of any one of embodiments 1-7, where converting the control information into the haptic signal is performed by the at least one processor in a remote device remote from a user device, and wherein the audio data and the haptic signal are provided, by the at least one processor, to the user device to render the audio and to generate the one or more haptic effects.

[00147] Embodiment 9 includes the method of any one of embodiments 1-8, where extracting the audio data and the control information and converting the control information into the haptic signal are performed by the at least one processor at at least one of an application level, an application programming interface level, a framework level, a hardware abstraction layer, a driver level, and a firmware level of a user device.

[00148] Embodiment 10 includes the method of any one of embodiments 1-9, where the method further comprises: updating the audio signal by adding additional information to the control information embedded in the audio signal to generate an updated audio signal, where the control information provided to convert the audio data into the haptic signal is extracted from the updated audio signal.

[00149] Embodiment 11 includes the method of any one of embodiments 1-7 and 9-10, where converting the audio data into the haptic signal is performed by the at least one processor in a user device, and wherein the method further comprises: rendering the audio based on the audio data via the audio output system associated with the user device, and generating the one or more haptic effects by the haptic device based on the haptic signal, the haptic device being associated with the user device.

[00150] Embodiment 12 includes the method of any one of embodiments 1-11, where each of the one or more haptic effect tags includes one or more parameters including at least one of: description information of the one or more audio segments, haptic device information for the one or more audio segments indicating at least one of a type of haptic device to be used to render the haptic effect and a particular haptic device to be used to render the haptic effect, a source of the one or more audio segments, a haptic track associated with the one or more audio segments, information about a software application used to render the one or more audio segments, haptic effect time information indicating time for generating the haptic effect, and user information associated with the one or more audio segments.

[00151] Embodiment 13 includes the method of any one of embodiments 1-12, where the one or more audio segments include a plurality of audio segments. The method further comprises associating one or more first audio segments of the plurality of audio segments with a first output audio channel and one or more second audio segments of the plurality of audio segments with a second output audio channel, where providing the audio data comprises outputting the one or more first audio segments via the first output audio channel and outputting the one or more second audio segments via the second output audio channel, the control information includes first control information associated with the first output channel and second control information associated with the second output channel, converting the audio data into the haptic signal includes converting the one or more first audio segments into a first haptic signal based on the first control information and converting the one or more second audio segments into a second haptic signal based on the second control information, and providing the haptic signal includes providing the first haptic signal to render a first portion of the one or more haptic effects via a first haptic output component of the haptic output device and providing the second haptic signal to render a second portion of the one or more haptic effects via a second haptic output component of the haptic output device.

[00152] Embodiment 14 includes the method of any one of embodiments 1-13, where the control information is included in the audio data as a plurality of pulses having different magnitudes and different frequencies to indicate different parameters for the one or more audio segments.
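
One conceivable realization, sketched below purely as an illustration, synthesizes short sinusoidal pulses in which the frequency selects a parameter and the magnitude carries its value; the particular frequencies and their meanings are invented for this example.

    import numpy as np

    SAMPLE_RATE = 48_000  # Hz

    def parameter_pulse(magnitude: float, frequency_hz: float,
                        duration_s: float = 0.005) -> np.ndarray:
        """A short sinusoidal pulse; the frequency identifies a parameter and
        the magnitude carries its value."""
        t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
        return magnitude * np.sin(2 * np.pi * frequency_hz * t)

    # Hypothetical mapping: an 18 kHz pulse enables haptics and its magnitude
    # sets a gain, while a 19 kHz pulse selects a conversion algorithm.
    control_pulses = np.concatenate([
        parameter_pulse(magnitude=0.2, frequency_hz=18_000),
        parameter_pulse(magnitude=0.5, frequency_hz=19_000),
    ])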

[00153] Embodiment 15 includes the method of any one of embodiments 1-14, where each of the one or more audio segments has a same time duration.

[00154] Embodiment 16 includes the method of any one of embodiments 1-15, where at least two of the one or more audio segments at least partially overlap in time.

[00155] Embodiment 17 of the present disclosure relates to a method of providing audio data with haptic information. The method comprises segmenting, by at least one processor, the audio data into one or more audio segments. The method comprises generating control information for the one or more audio segments, the control information including one or more haptic effect tags associated with the one or more audio segments, the one or more haptic effect tags indicating at least one of (1) whether to generate a haptic effect for the one or more audio segments, and (2) at least one parameter of an algorithm for converting the one or more audio segments into the haptic effect. The method comprises providing an audio signal including the audio data and the control information to render audio and one or more haptic effects based on the one or more audio segments.
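
A compact sketch of this encoder-side method might look as follows; the equal-length segmentation, the energy threshold, and every name are assumptions made for the example.

    import numpy as np

    def segment_audio(audio: np.ndarray, segment_len: int) -> list:
        """Split the audio data into equal-length segments."""
        n = len(audio) // segment_len
        return [audio[i * segment_len:(i + 1) * segment_len] for i in range(n)]

    def generate_control_info(segments: list) -> list:
        """Attach a haptic effect tag to each segment; a simple energy test
        stands in for a real tagging decision."""
        return [{"render_haptics": float(np.mean(s ** 2)) > 1e-4,
                 "algorithm": "peak-follow"} for s in segments]

    audio = np.random.default_rng(0).normal(0.0, 0.1, 48_000)
    segments = segment_audio(audio, segment_len=4_800)
    control_info = generate_control_info(segments)

    # The audio signal bundles the audio data with the control information.
    audio_signal = {"audio_data": segments, "control_information": control_info}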

[00156] Embodiment 18 includes the method of embodiment 17, where the one or more audio segments in the audio data includes a plurality of audio segments and the one or more haptic tags in the control information includes a plurality of haptic effect tags associated with the plurality of audio segments.

[00157] Embodiment 19 includes the method of embodiment 18, where generating the control information comprises: associating each of the plurality of audio segments with a respective audio group of a plurality of audio groups based on one or more characteristics of each audio segment; and associating the plurality of haptic effect tags respectively with the plurality of audio groups.

[00158] Embodiment 20 includes the method of embodiment 19, where the plurality of audio groups includes a voice group for audio samples having a human voice, a non-voice group for audio samples without a human voice, and a silence group for audio samples with no sound.

[00159] Embodiment 21 includes the method of embodiment 20, where at least one audio segment of the one or more audio segments that matches the voice group based on the audio-to-haptics model is associated with a haptic effect tag indicating that no haptic effect is to be rendered for the at least one audio segment.
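
The grouping and tagging of embodiments 20 and 21 can be illustrated by the sketch below, in which a zero-crossing-rate heuristic stands in for a trained audio-to-haptics model and the thresholds are arbitrary; segments assigned to the voice group receive a tag suppressing haptic rendering.

    import numpy as np

    def classify_segment(seg: np.ndarray) -> str:
        """Assign a segment to the silence, voice, or non-voice group.
        The heuristic below is a placeholder for a trained model."""
        if float(np.mean(seg ** 2)) < 1e-6:
            return "silence"
        zcr = np.mean(np.abs(np.diff(np.sign(seg)))) / 2  # zero-crossing rate
        return "voice" if zcr < 0.1 else "non-voice"

    def tag_for_group(group: str) -> dict:
        # Only the non-voice group is haptified; voice and silence are not.
        return {"group": group, "render_haptics": group == "non-voice"}

    rng = np.random.default_rng(1)
    segments = [np.zeros(480), rng.normal(0.0, 0.1, 480)]
    tags = [tag_for_group(classify_segment(s)) for s in segments]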

[00160] Embodiment 22 includes the method of any one of embodiments 17-21, where the method further comprises generating the audio signal by combining the control information with the audio data.

[00161] Embodiment 23 includes the method of any one of embodiments 17-22, where generating the control information comprises applying an audio-to-haptics model to the audio data to associate the one or more haptic effect tags with the one or more audio segments, the audio-to-haptics model being trained based on known associations between a plurality of audio samples and a plurality of sound types.
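
A toy stand-in for such a model, trained on known (audio sample, sound type) pairs, is sketched below; the two features and the nearest-centroid rule are choices made only for this example.

    import numpy as np

    def features(seg: np.ndarray) -> np.ndarray:
        """Two toy features: log energy and zero-crossing rate."""
        energy = np.log(np.mean(seg ** 2) + 1e-12)
        zcr = np.mean(np.abs(np.diff(np.sign(seg)))) / 2
        return np.array([energy, zcr])

    def train(samples: list, labels: list) -> dict:
        """Store one feature centroid per known sound type."""
        feats = np.array([features(s) for s in samples])
        labs = np.array(labels)
        return {lab: feats[labs == lab].mean(axis=0) for lab in set(labels)}

    def predict(model: dict, seg: np.ndarray) -> str:
        f = features(seg)
        return min(model, key=lambda lab: float(np.linalg.norm(f - model[lab])))

    rng = np.random.default_rng(2)
    samples = [rng.normal(0.0, 0.2, 480), np.sin(np.linspace(0.0, 40.0, 480))]
    model = train(samples, ["explosion", "voice"])
    tag = {"render_haptics": predict(model, samples[0]) != "voice"}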

[00162] Embodiment 24 includes the method of any one of embodiments 17-23, where each of the one or more haptic effect tags includes one or more parameters including at least one of: description information of the one or more audio segments, haptic device information for the one or more audio segments indicating at least one of a type of haptic device to be used to render the haptic effect and a particular haptic device to be used to render the haptic effect, a source of the one or more audio segments, a haptic track associated with the one or more audio segments, information about a software application used to render the one or more audio segments, haptic effect time information indicating time for generating the haptic effect, and user information associated with the one or more audio segments.

[00163] Embodiment 25 includes the method of any one of embodiments 17-24, where the control information is included with the audio data as a plurality of pulses having different magnitudes and different frequencies to indicate different parameters for the one or more audio segments.

[00164] Embodiment 26 includes the method of any one of embodiments 17-25, where each of the one or more audio segments has a same time duration.

[00165] While various embodiments have been described above, it should be understood that they have been presented only as illustrations and examples of the present invention, and not by way of limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the invention. Thus, the breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the appended claims and their equivalents. It will also be understood that each feature of each embodiment discussed herein, and of each reference cited herein, can be used in combination with the features of any other embodiment. All patents and publications discussed herein are incorporated by reference herein in their entirety.