Title:
METHOD FOR OUTPUTTING A MODIFIED AUDIO SIGNAL AND GRAPHICAL USER INTERFACES PRODUCED BY AN APPLICATION PROGRAM
Document Type and Number:
WIPO Patent Application WO/2014/081384
Kind Code:
A1
Abstract:
According to various embodiments, a method for outputting a modified audio signal may be provided. The method may include: receiving from a user an input indicating an angle; determining a parameter for a head-related transfer function based on the received input indicating the angle; modifying an audio signal in accordance with the head-related transfer function based on the determined parameter; and outputting the modified audio signal.

Inventors:
TAN MIN-LIANG (SG)
Application Number:
PCT/SG2012/000439
Publication Date:
May 30, 2014
Filing Date:
November 22, 2012
Assignee:
RAZER ASIA PACIFIC PTE LTD (SG)
International Classes:
H04R5/00; G06F3/048; H04S1/00
Foreign References:
US20120201405A12012-08-09
US20060280323A12006-12-14
US20120093348A12012-04-19
US20120022842A12012-01-26
KR20080055622A2008-06-19
Other References:
See also references of EP 2923500A4
Attorney, Agent or Firm:
VIERING, JENTSCHURA & PARTNER LLP (Rochor Post Office, Rochor Road, Singapore 3, SG)
Claims:
CLAIMS

1. A method for outputting a modified audio signal, the method comprising:

receiving from a user an input indicating an angle;

determining a parameter for a head-related transfer function based on the received input indicating the angle;

modifying an audio signal in accordance with the head-related transfer function based on the determined parameter; and

outputting the modified audio signal.

2. The method of claim 1,

wherein the input indicating the angle is a graphical input indicating the angle by a point on a geometric shape.

3. The method of claim 1,

wherein the input indicating the angle is a graphical input indicating the angle by a direction from a center of a geometric shape.

4. The method of claim 1, further comprising:

receiving from the user an input indicating a head size of the user.

5. The method of claim 4,

wherein the parameter for the head-related transfer function is determined further based on the received input indicating the head size of the user.

6. The method of claim 1, further comprising:

receiving from the user an input indicating a head shape of the user.

7. The method of claim 6, wherein the parameter for the head-related transfer function is determined further based on the received input indicating the head shape of the user.

8. The method of claim 1, further comprising:

receiving from the user an input indicating an ear size of the user.

9. The method of claim 8,

wherein the parameter for the head-related transfer function is determined further based on the received input indicating the ear size of the user.

10. The method of claim 1, further comprising:

receiving from the user an input indicating an ear shape of the user.

11. The method of claim 10,

wherein the parameter for the head-related transfer function is determined further based on the received input indicating the ear shape of the user.

12. The method of claim 1,

wherein the receiving and the determining are performed for a plurality of virtual speaker positions.

13. The method of claim 1, further comprising:

sending the determined parameter to a server in a cloud.

14. The method of claim 1, further comprising:

receiving a parameter for the head-related transfer function from a server in a cloud;

modifying the audio signal in accordance with the head-related transfer function based on the received parameter; and

outputting the modified audio signal.

15. A graphical user interface produced by an application program, the graphical user interface comprising:

an application program window generated by the application program, wherein the application program window comprises:

a visual representation of the user;

a visual representation of a speaker on a geometric shape around the user; and

an input for inputting an indication of an angle on the geometric shape with respect to the visual representation of the speaker.

16. The graphical user interface of claim 15,

wherein the input comprises a marker configured to be moved on the geometric shape.

17. The graphical user interface of claim 15,

wherein the input comprises the visual representation of the speaker configured to be moved on the geometric shape.

18. The graphical user interface of claim 15,

wherein the input comprises a needle of a compass configured to be moved around the geometric shape with respect to the user.

19. The graphical user interface of claim 15,

wherein the graphical user interface is configured to send the input indication of the angle to the application program.

20. The graphical user interface of claim 15,

wherein the application program is configured to

determine a parameter for a head-related transfer function based on the received input indicating the angle, modify an audio signal in accordance with the head-related transfer function based on the determined parameter; and

output the modified audio signal.

21. The graphical user interface of claim 15,

wherein the application program window further is configured to receive from the user an input indicating a head size of the user.

22. The graphical user interface of claim 21,

wherein the application program is further configured to determine the parameter for the head-related transfer function further based on the received input indicating the head size of the user.

23. The graphical user interface of claim 15,

wherein the application program window further is configured to receive from the user an input indicating a head shape of the user.

24. The graphical user interface of claim 23,

wherein the application program is further configured to determine the parameter for the head-related transfer function further based on the received input indicating the head shape of the user.

25. The graphical user interface of claim 15,

wherein the application program window further is configured to receive from the user an input indicating an ear size of the user.

26. The graphical user interface of claim 25,

wherein the application program is further configured to determine the parameter for the head-related transfer function further based on the received input indicating the ear size of the user.

27. The graphical user interface of claim 15,

wherein the application program window further is configured to receive from the user an input indicating an ear shape of the user.

28. The graphical user interface of claim 27,

wherein the application program is further configured to determine the parameter for the head-related transfer function further based on the received input indicating the ear shape of the user.

29. The graphical user interface of claim 15,

wherein the application program window comprises visual representations of a plurality of speakers and wherein the input is for inputting an angle for each of the visual representations of the speakers.

30. The graphical user interface of claim 15,

wherein the application program window further comprises a sender input for receiving an input for instructing the application program to send a parameter determined based on the input angle to a server in a cloud.

31. The graphical user interface of claim 15,

wherein the application program window further comprises a receiver for receiving a parameter for a head-related transfer function from a server in a cloud;

wherein the application program is configured to modify the audio signal in accordance with the head-related transfer function based on the received parameter and to output the modified audio signal.

Description:
METHODS FOR OUTPUTTING A MODIFIED AUDIO SIGNAL

AND GRAPHICAL USER INTERFACES PRODUCED BY AN APPLICATION PROGRAM

Technical Field

[0001] Various embodiments generally relate to methods for outputting a modified audio signal and to graphical user interfaces produced by an application program.

Background

[0002] A head-reflectance transfer function (or head-related transfer function; HRTF) may be applied to an incoming analog stereo audio signal in order to create the illusion of a multi-channel audio system through typical stereo headphones. This HRTF may have to be calibrated to a specific user.

Summary of the Invention

[0003] According to various embodiments, a method for outputting a modified audio signal may be provided. The method may include: receiving from a user an input indicating an angle; determining a parameter for a head-related transfer function based on the received input indicating the angle; modifying an audio signal in accordance with the head-related transfer function based on the determined parameter; and outputting the modified audio signal.

[0004] According to various embodiments, a graphical user interface produced by an application program may be provided. The graphical user interface may include: an application program window generated by the application program, wherein the application program window may include: a visual representation of the user; a visual representation of a speaker on a geometric shape around the user; and an input for inputting an indication of an angle on the geometric shape with respect to the visual representation of the speaker.

Brief Description of the Drawings

[0005] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. The dimensions of the various features or elements may be arbitrarily expanded or reduced for clarity. In the following description, various embodiments of the invention are described with reference to the following drawings, in which:

[0006] FIG. 1 shows a flow diagram illustrating a method for outputting a modified audio signal in accordance with an embodiment;

[0007] FIG. 2 shows an audio output device in accordance with an embodiment;

[0008] FIG. 3 shows a graphical user interface in accordance with an embodiment;

[0009] FIG. 4 shows a graphical user interface in accordance with an embodiment;

[0010] FIG. 5A shows a diagram of an application program window in accordance with one embodiment;

[0011] FIG. 5B shows a plurality of ear shapes;

[0012] FIG. 5 shows a screen shot of a graphical user interface for calibrating virtual speaker positions in accordance with an embodiment;

[0013] FIG. 6A shows a screen shot of a graphical user interface for calibrating virtual speaker positions in accordance with an embodiment;

[0014] FIG. 6B shows a screen shot of a graphical user interface in accordance with an embodiment, wherein a virtual speaker location marker is shown when the virtual speaker location 616 is selected;

[0015] FIG. 7 shows a screen shot of a graphical user interface or application program window in accordance with an embodiment, wherein an audio output device may be set;

[0016] FIG. 8 shows a screen shot of a graphical user interface or application program window in accordance with an embodiment, wherein general audio output parameters may be set;

[0017] FIG. 9 shows a screen shot of a graphical user interface or application program window in accordance with an embodiment, wherein equalizer parameters may be set;

[0018] FIG. 10 shows a screen shot of a graphical user interface or application program window in accordance with an embodiment, wherein the position of a virtual speaker may be adjusted;

[0019] FIG. 11 shows a screen shot of a graphical user interface in accordance with an embodiment, wherein a marker indicating an angle for the chosen virtual speaker may be set;

[0020] FIG. 12 shows a screen shot of a graphical user interface in accordance with an embodiment, wherein a marker indicating an angle for the chosen virtual speaker may be set;

[0021] FIG. 13 shows a screen shot of a graphical user interface or application program window showing alternative representation of the speakers; and

[0022] FIG. 14 shows an application window according to an embodiment.

Detailed Description

[0023] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. Other embodiments may be utilized and structural and logical changes may be made without departing from the scope of the invention. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.

[0024] In order that the invention may be readily understood and put into practical effect, particular embodiments will now be described by way of examples and not limitations, and with reference to the figures.

[0025] The audio output device may include a memory which is for example used in the processing carried out by the audio output device. A memory used in the embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory), or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), an EEPROM (Electrically Erasable PROM), or a flash memory, e.g. a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).

[0026] In an embodiment, a "circuit" may be understood as any kind of a logic implementing entity, which may be special purpose circuitry or a processor executing software stored in a memory, firmware, or any combination thereof. Thus, in an embodiment, a "circuit" may be a hard-wired logic circuit or a programmable logic circuit such as a programmable processor, e.g. a microprocessor (e.g. a Complex Instruction Set Computer (CISC) processor or a Reduced Instruction Set Computer (RISC) processor). A "circuit" may also be a processor executing software, e.g. any kind of computer program, e.g. a computer program using a virtual machine code such as e.g. Java. Any other kind of implementation of the respective functions which will be described in more detail below may also be understood as a "circuit" in accordance with an alternative embodiment. It will be understood that what is described herein as circuits with different names (for example "circuit A" and "circuit B") may also be provided in one physical circuit as described above.

[0027] It will be understood that a geometric shape may be or may include a conic section, for example a circle, an ellipse, a parabola or a hyperbola, or may include or may be a polygon, or may include or may be any other kind of geometric shape.

[0028] Various embodiments are provided for devices, and various embodiments are provided for methods. It will be understood that basic properties of the devices also hold for the methods and vice versa. Therefore, for sake of brevity, duplicate description of such properties may be omitted.

[0029] It will be understood that any property described herein for a specific device may also hold for any device described herein. It will be understood that any property described herein for a specific method may also hold for any method described herein. Furthermore, it will be understood that for any device or method described herein, not necessarily all the components or steps described must be included in the device or method; only some (but not all) components or steps may be included.

[0030] FIG. 1 shows a flow diagram 100 illustrating a method for outputting a modified audio signal in accordance with an embodiment. In 102, an input indicating an angle may be received from a user. In 104, a parameter (or a plurality of parameters) for a head-related transfer function may be determined based on the received input indicating the angle. In 106, an audio signal may be modified in accordance with the head-related transfer function based on the determined parameter (or the plurality of determined parameters). In 108, the modified audio signal may be output.
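The four operations of the flow diagram map naturally onto a small software pipeline. The following is a minimal Python sketch of that pipeline; the helper names (lookup_hrtf_parameter, apply_hrtf, process) and the coefficient values are assumptions for illustration only and are not taken from the patent.

import numpy as np

def lookup_hrtf_parameter(angle_deg: float) -> np.ndarray:
    # Placeholder for the determination step (104): a real implementation
    # would index a table of measured HRTF/HRIR coefficients by azimuth.
    return np.array([1.0, 0.5, 0.25]) * np.cos(np.radians(angle_deg))

def apply_hrtf(audio: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    # Modification step (106): filter the audio signal with the coefficients.
    return np.convolve(audio, coeffs, mode="same")

def process(audio: np.ndarray, angle_deg: float) -> np.ndarray:
    coeffs = lookup_hrtf_parameter(angle_deg)  # 104: determine parameter(s)
    return apply_hrtf(audio, coeffs)           # 106: modify; 108: return for output

# 102: the user drags a virtual speaker to 45 degrees.
signal = np.random.default_rng(0).standard_normal(48_000)
modified = process(signal, 45.0)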

[0031] According to various embodiments, the input indicating the angle may be a graphical input indicating the angle by a point on a geometric shape.

[0032] According to various embodiments, the input indicating the angle may be a graphical input indicating the angle by a direction from a center of a geometric shape.

[0033] According to various embodiments, the input indicating the angle may be a real number indicating the angle.

[0034] According to various embodiments, the method may further include displaying a presently set angle. According to various embodiments, the receiving the input indicating the angle from the user may include receiving an indication for increasing or decreasing the angle in response to the displaying. According to various embodiments, the method may further include setting the angle based on the indication.

[0035] According to various embodiments, the method may further include receiving from the user an input indicating a head size of the user.

[0036] According to various embodiments, the parameter (or the plurality of parameters) for the head-related transfer function may be determined further based on the received input indicating the head size of the user.

[0037] According to various embodiments, the method may further include receiving from the user an input indicating a head shape of the user.

[0038] According to various embodiments, the parameter (or the plurality of parameters) for the head-related transfer function may be determined further based on the received input indicating the head shape of the user.

[0039] According to various embodiments, the method may further include receiving from the user an input indicating an ear size of the user.

[0040] According to various embodiments, the parameter (or the plurality of parameters) for the head- related transfer function may be determined further based on the received input indicating the ear size of the user.

[0041] According to various embodiments, the method may further include receiving from the user an input indicating an ear shape of the user.

[0042] According to various embodiments, the parameter (or the plurality of parameters) for the head-related transfer function may be determined further based on the received input indicating the ear shape of the user.

[0043] According to various embodiments, the receiving and the determining may be performed for a plurality of virtual speaker positions.

[0044] According to various embodiments, the method may further include sending the determined parameter (or plurality of determined parameters) to a server in a cloud.

[0045] According to various embodiments, the method may further include: receiving a parameter (or a plurality of parameters) for the head-related transfer function from a server in a cloud; modifying the audio signal in accordance with the head-related transfer function based on the received parameter (or the plurality of received parameters); and outputting the modified audio signal.

[0046] According to various embodiments, the parameter (or the plurality of parameters) for the head-related transfer function may be determined based on the received input indicating the angle using a lookup table storing a relation between angles and parameters (or the plurality of parameters).
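A lookup table of this kind can be as simple as a list of tabulated azimuths and parameter values with interpolation between entries. The sketch below assumes a single scalar parameter and invented table values purely for illustration; a real table would hold a full coefficient set per ear and per tabulated angle.

import numpy as np

# Hypothetical table: azimuth (degrees) -> one HRTF parameter.
TABLE_ANGLES = np.array([0, 45, 90, 135, 180, 225, 270, 315, 360])
TABLE_PARAMS = np.array([0.0, 0.35, 0.65, 0.85, 1.0, 0.85, 0.65, 0.35, 0.0])

def parameter_for_angle(angle_deg: float) -> float:
    """Return the parameter for an input angle, interpolating between entries."""
    return float(np.interp(angle_deg % 360, TABLE_ANGLES, TABLE_PARAMS))

print(parameter_for_angle(60.0))  # falls between the 45- and 90-degree entries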

[0047] FIG. 2 shows an audio output device 200 in accordance with an embodiment. The audio output device 200 may include an input circuit 202 configured to receive from a user an input indicating an angle. The audio output device 200 may further include a determination circuit 204 configured to determine a parameter (or a plurality of parameters) for a head-related transfer function based on the received input indicating the angle. The audio output device 200 may further include a modification circuit 206 configured to modify an audio signal in accordance with the head-related transfer function based on the determined parameter (or the plurality of determined parameters). The audio output device 200 may further include an output circuit 208 configured to output the modified audio signal. The input circuit 202, the determination circuit 204, the modification circuit 206, and the output circuit 208 may be connected via a connection 210 (or a plurality of separate connections), for example an electrical or optical connection, for example any kind of cable or bus.

[0048] According to various embodiments, the input circuit 202 may be configured to receive a graphical input indicating the angle by a point on a geometric shape.

[0049] According to various embodiments, the input circuit 202 may be configured to receive a graphical input indicating the angle by a direction from a center of a geometric shape.

[0050] According to various embodiments, the input circuit 202 may be configured to receive a real number indicating the angle.

[0051] According to various embodiments, the audio output device 200 may further include a display circuit (not shown) configured to display a presently set angle. According to various embodiments, the input circuit 202 may be configured to receive from the user an indication for increasing or decreasing the angle in response to the displayed presently set angle. According to various embodiments, the audio output device 200 may further include a setting circuit (not shown) configured to set the angle based on the indication.

[0052] According to various embodiments, the input circuit 202 may further be configured to receive from the user an input indicating a head size of the user.

[0053] According to various embodiments, the determination circuit 204 may further be configured to determine the parameter (or the plurality of parameters) for the head- related transfer function based on the received input indicating the head size of the user.

[0054] According to various embodiments, the input circuit 202 may further be configured to receive from the user an input indicating a head shape of the user.

[0055] According to various embodiments, the determination circuit 204 may further be configured to determine the parameter (or the plurality of parameters) for the head- related transfer function based on the received input indicating the head shape of the user.

[0056] According to various embodiments, the input circuit 202 may further be configured to receive from the user an input indicating an ear size of the user.

[0057] According to various embodiments, the determination circuit 204 may further be configured to determine the parameter (or the plurality of parameters) for the head- related transfer function based on the received input indicating the ear size of the user.

[0058] According to various embodiments, the input circuit 202 may further be configured to receive from the user an input indicating an ear shape of the user.

[0059] According to various embodiments, the determination circuit 204 may further be configured to determine the parameter (or the plurality of parameters) for the head- related transfer function based on the received input indicating the ear shape of the user.

[0060] According to various embodiments, the input circuit 202 and the determination circuit 204 may be configured to perform the receiving and the determining for a plurality of virtual speaker positions.

[0061] According to various embodiments, the audio output device 200 may further include a sending circuit (not shown) configured to send the determined parameter (or the plurality of determined parameters) to a server in a cloud.

[0062] According to various embodiments, the audio output device 200 may further include a receiving circuit (not shown) configured to receive a parameter (or a plurality of parameters) for the head-related transfer function from a server in a cloud. According to various embodiments, the modification circuit 206 may be configured to modify the audio signal in accordance with the head-related transfer function based on the received parameter (or the plurality of received parameters). According to various embodiments, the output circuit 208 may be configured to output the modified audio signal.

[0063] According to various embodiments, the determination circuit 204 may be configured to determine the parameter (or the plurality of parameters) for the head-related transfer function based on the received input indicating the angle using a lookup table storing a relation between angles and parameters (or the plurality of parameters).

[0064] FIG. 3 shows a graphical user interface 300 in accordance with an embodiment. The graphical user interface 300 may for example be displayed on a computer screen 302. The graphical user interface 300 may include an application program window 304 generated by the application program. The application program window 304 may include a visual representation 306 of the user (here shown as a geometric shape). The application program window 304 may further include a visual representation 308 of a speaker (here shown as a geometric shape) on a geometric shape 310 around the user. It will be understood that the geometric shape 310 in the graphical user interface may be displayed as any kind of geometrical form, for example an ellipse, wherein the user 306 is located at the center of the geometric shape, for example the ellipse 310. The speaker 308 is movable along the geometric shape 310, wherein the positioning of the speaker 308 translates to inputting an indication of an angle on the geometric shape 310 with respect to the user 306. In an alternative embodiment, the application program window 304 may further include an input 312 for inputting an indication of an angle on the geometric shape with respect to the visual representation of the speaker (here shown as a geometric shape that may be moved on the geometric shape 310), wherein the input 312 is associated with the speaker 308. Moving the speaker 308 adjusts the angle.
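Translating the on-screen position of the speaker (or marker) into the angle that the method consumes is a small geometric step. The following is a minimal sketch, assuming screen coordinates with the user representation at the center and 0 degrees pointing straight ahead (toward the top of the screen); the function name and coordinate convention are assumptions for illustration, not part of the patent.

import math

def azimuth_from_marker(cx: float, cy: float, mx: float, my: float) -> float:
    """Clockwise azimuth of marker (mx, my) around the user at (cx, cy),
    with 0 degrees directly in front (toward the top of the screen)."""
    dx, dy = mx - cx, my - cy
    return math.degrees(math.atan2(dx, -dy)) % 360

# A marker dragged to the user's right on a circle of radius 100:
print(azimuth_from_marker(0, 0, 100, 0))  # 90.0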

[0065] FIG. 4 shows an alternative implementation 400. In the alternative implementation, a movable input 312 may be used to adjust the angle.

[0066] According to various embodiments, the visual representation 308 of the speaker may include an image of a speaker or another image of an audio output device.

[0067] According to various embodiments, the visual representation 308 of the speaker may include a ball.

[0068] According to various embodiments, the input 312 may include a marker configured to be moved on the geometric shape 310.

[0069] According to various embodiments, the input 312 may include the visual representation of the speaker configured to be moved on the geometric shape.

[0070] According to various embodiments, the visual representation 308 of the speaker or the input 312 may include a needle of a compass configured to be moved around the geometric shape 310 with respect to the user 306.

[0071] According to various embodiments, the graphical user interface 300 may be configured to send the input indication of the angle to the application program.

[0072] According to various embodiments, the application program may be configured to: determine a parameter (or a plurality of parameters) for a head-related transfer function based on the received input indicating the angle, modify an audio signal in accordance with the head-related transfer function based on the determined parameter (or the plurality of determined parameters); and output the modified audio signal.

[0073] According to various embodiments, the application program window 304 may further be configured to receive from the user an input indicating a head size of the user.

[0074] According to various embodiments, the application program may further be configured to determine the parameter (or the plurality of parameters) for the head-related transfer function further based on the received input indicating the head size of the user.

[0075] According to various embodiments, the application program window 304 may further be configured to receive from the user an input indicating a head shape of the user.

[0076] According to various embodiments, the application program may further be configured to determine the parameter (or the plurality of parameters) for the head-related transfer function further based on the received input indicating the head shape of the user.

[0077] According to various embodiments, the application program window 304 may further be configured to receive from the user an input indicating an ear size of the user.

[0078] According to various embodiments, the application program may further be configured to determine the parameter (or the plurality of parameters) for the head-related transfer function further based on the received input indicating the ear size of the user.

[0079] According to various embodiments, the application program window 304 may further be configured to receive from the user an input indicating an ear shape of the user.

[0080] According to various embodiments, the application program may further be configured to determine the parameter (or the plurality of parameters) for the head-related transfer function further based on the received input indicating the ear shape of the user.

[0081] According to various embodiments, the application program window 304 may include visual representations of a plurality of speakers, and the input 312 may be for inputting an angle for each of the visual representations of the speakers.

[0082] According to various embodiments, the application program window 304 may further include a sender input for receiving an input for instructing the application program to send a parameter (or a plurality of parameters) determined based on the input angle to a server in a cloud.

[0083] According to various embodiments, the application program window 304 may further include a receiver for receiving a parameter (or a plurality of parameters) for a head-related transfer function from a server in a cloud. According to various embodiments, the application program may be configured to modify the audio signal in accordance with the head-related transfer function based on the received parameter (or the plurality of received parameters) and to output the modified audio signal.

[0084] According to various embodiments, the application program may be configured to determine a parameter (or a plurality of parameters) for a head-related transfer function based on the input indication of the angle using a lookup table storing a relation between angles and parameters (or the plurality of parameters) for the head- related transfer function.

[0085] According to various embodiments, there may be seven speakers in the graphical user interface (UI), and the seven speakers may represent a 7.1 sound system. Typical sound systems may be 5.1 (e.g. a cinema theatre). So, the UI may be provided to (i) allow the user to calibrate the sound setting of a 7.1 audio headset and/or (ii) perform virtualization so that a 2.1 or 5.1 headset sounds like a 7.1 system to the user.

[0086] According to various embodiments, a method of HRTF calibration may be provided.

[0087] A head-reflectance transfer function (or head-related transfer function; HRTF) may be applied to an incoming analog stereo audio signal in order to create the illusion of a multi-channel audio system through typical stereo headphones. According to various embodiments, a method of calibrating an HRTF system using a graphical user interface to position virtual speaker positions may be provided, in a way which is easy to understand and manipulate by a novice user with no prior experience in tuning an HRTF. According to various embodiments, the HRTF calibration parameters determined by the user may further be associated with a unique cloud identifier for that user, and these settings may be stored for use across any device connecting to the cloud service. Cloud identification may enable not only the storage of an HRTF calibration profile for a particular user, but also of the machine and devices in the audio reproduction environment, such as the digital-to-analog converter (DAC), headphone amplifier, and the headphones or headset used to reproduce sound.

[0088] A head-related transfer function (HRTF) may be a response that may describe how an ear receives a sound from a point in space; a pair of HRTFs for two ears may be used to synthesize a binaural sound that seems to come from a particular point in space. It may be a transfer function, describing how a sound from a specific point will arrive at the ear (generally at the outer end of the auditory canal). Some consumer home entertainment products designed to reproduce surround sound from stereo (two-speaker) headphones may use HRTFs. Some forms of HRTF-processing may have also been included in computer software to simulate surround sound playback from loudspeakers.

[0089] Humans have just two ears, but can locate sounds in three dimensions - in range (distance), in direction above and below, in front and to the rear, as well as to either side. This may be possible because the brain, inner ear and the external ears (pinna) may work together to make inferences about location. This ability to localize sound sources may have developed in humans as an evolutionary necessity, since the eyes may only see a fraction of the world around a viewer, and vision may be hampered in darkness, while the ability to localize a sound source works in all directions, to varying accuracy, and even in the dark.

[0090] Humans may estimate the location of a source by taking cues derived from one ear (monaural cues), and by comparing cues received at both ears (difference cues or binaural cues). Among the difference cues may be time differences of arrival and intensity differences. The monaural cues may come from the interaction between the sound source and the human anatomy, in which the original source sound may be modified before it enters the ear canal for processing by the auditory system. These modifications may encode the source location, and may be captured via an impulse response which may relate the source location and the ear location. This impulse response may be termed the head-related impulse response (HRIR): Convolution of an arbitrary source sound with the HRIR may convert the sound to that which would have been heard by the listener if it had been played at the source location, with the listener's ear at the receiver location. HRIRs may have been used to produce virtual surround sound.

[0091] The HRTF may be the Fourier transform of HRIR. The HRTF may also be referred to as the anatomical transfer function (ATF).

[0092] HRTFs for left and right ear (expressed above as HRIRs) may describe the filtering of a sound source (x(t)) before it is perceived at the left and right ears as xL(t) and xR(t), respectively.
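In the notation of the two preceding paragraphs, the relation between the source signal, the HRIRs and the HRTFs is commonly written as follows; this is a standard formulation added for clarity, not a definition taken verbatim from the patent:

x_L(t) = (h_L * x)(t) = \int_{-\infty}^{\infty} h_L(\tau)\, x(t-\tau)\, d\tau, \qquad x_R(t) = (h_R * x)(t), \qquad H_L(f) = \mathcal{F}\{h_L(t)\}, \quad H_R(f) = \mathcal{F}\{h_R(t)\}.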

[0093] The HRTF may also be described as the modifications to a sound from a direction in free air to the sound as it arrives at the eardrum. These modifications may include the shape of the listener's outer ear, the shape of the listener's head and body, the acoustical properties of the space in which the sound is played, and so on. All these properties may influence how (or whether) a listener may accurately tell what direction a sound is coming from.

[0094] HRTFs may vary significantly from person to person. Perceptual distortions may occur when one listens to sounds spatialized with non-individualized HRTF. This focus on a "one size fits all" approach to HRTF may assume average physiology and morphology across all users. The size of the head and placement of the headphones or headset may be a critical determining factor in how the coefficients are created and applied to the filter. Additionally, the shape of the ear and size of the ears may have a major impact on how sounds propagate from the drivers to the inner ear. As such, one size or type of filter does not fit all listeners, resulting in poor performance of the virtual surround sound system as it targets the average values found across all humans.

[0095] Additionally, a user may establish settings which may be unique to that user, and those settings may not persist across multiple systems and device chains, as the parameters established by the user during calibration remain locked into that particular device for which settings were configured.

[0096] According to various embodiments, the user may calibrate the HRTF filter so that it works best for them, and then save those settings to the client and mirror them into the cloud for use on any client in the future. By associating the HRTF calibration parameters used on one audio reproduction system (such as a personal computer) with a unique cloud identification system, the user may configure or calibrate the HRTF algorithm on a single device or system and have those settings persist across a multitude of devices and systems through software interfaces which authenticate the user and transport their profile settings from the cloud into that system.

[0097] By allowing the user to provide more information about their morphological parameters such as head size, ear size and shape of the ears, as well as positioning of the virtual surround sound positions within the sound field, the listener may be provided with a much more accurate and personalized virtual surround sound experience.

[0098] According to various embodiments, a graphical user interface for the calibration of an HRTF and cloud services which synchronize a single device or system with a multitude of devices and systems through the use of unique identifiers for the user and the device(s) and machine(s) used to calibrate the HRTF algorithm may be provided. This may be based on user input to determine offsets in the virtual speaker positions, and on selection of head size, ear size and ear shape, which determine the appropriate HRTF coefficients to apply within the audio filter. For example, HRTF coefficients may be stored in the cloud and downloaded to the client, or may be included in the installation and kept local only. Once the user calibrates and selections of HRTF coefficients are made, the user's chosen configuration may be stored in the cloud for use on any other PC client they log into.
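The stored configuration can be pictured as a small per-user record keyed by the cloud identifier. The sketch below is illustrative only: the patent does not define a storage format, so the field names, the dataclass layout and the JSON encoding are all assumptions.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class HrtfProfile:
    user_id: str                  # unique cloud identifier for the user
    headset: str                  # headphone/headset the calibration targets
    head_size: str                # e.g. "small" / "medium" / "large"
    ear_shape: str                # one of the offered ear-shape options
    speaker_azimuths: dict = field(default_factory=dict)  # channel -> degrees

def serialize_for_cloud(profile: HrtfProfile) -> str:
    # Stand-in for the upload: a real client would send this record to the
    # cloud service and fetch it again on any machine the user logs in to.
    return json.dumps(asdict(profile))

profile = HrtfProfile("user-123", "stereo headset", "medium", "shape-3",
                      {"front_left": 42.0, "front_right": 318.0})
payload = serialize_for_cloud(profile)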

[0099] The first selection that a user may make is his or her head size, which may be a subjective selection given a set of options such as small, medium and large, corresponding to the circumference of the head as measured at the brow and around the largest area of the back of the head. This may be similar in approach to that of hat size measurement. The options may be relative to an average circumference of 58 cm, with a standard deviation of +/- 7 cm.

[00100] The second selection that a user may make is his or her head shape, based on a set of provided options such as round, oval, and inverted egg.

[00101] The third selection that a user may make is his or her ear size, which may be a subjective selection given a set of options such as small, medium and large, corresponding to the size of their outer ear (pinna, or auricle) relative to an average of 6 cm with a standard deviation of +/- 1.5 cm.

[00102] The fourth selection that a user may make is his or her ear shape, which may be a subjective selection from 8 common ear shape types.

[00103] FIG. 5A is a diagram of an application program window 400 in accordance with one embodiment of the present invention. The application program window 400 includes a first sub-window 401, a second sub-window 402, a third sub-window 403 and a fourth sub-window 404. The first sub-window 401 provides means for the users to select their head size. For example, the first sub-window 401 includes an input window for the user to input/type their head circumference size. In another embodiment, the first sub-window 401 may contain a drop-down menu/list with preset head circumference sizes, e.g. from 51cm to 65cm, wherein the preset sizes are selectable by the user. In yet another embodiment, the first sub-window 401 includes a plurality of images illustrating the different head sizes, e.g. a first image with a range of 51cm-55cm, a second image with a range of 56cm-60cm, and a third image with a range of 61cm-65cm. The users select the image with the closest range to their head size. Alternatively, the first sub-window may be a combination of the input window, the drop-down menu, or the plurality of images to allow flexibility in selecting the head size.

[00104] The second sub-window 402 provides means for the users to select their head shape. In one embodiment, the second sub-window 402 includes a drop-down menu/list with preset head shapes, e.g. round, oval, and inverted egg, which are selectable by the user. In another embodiment, the second sub-window 402 includes a plurality of images illustrating the different head shapes, and the users select the closest image to their head shape. In yet another embodiment, the second sub-window 402 is a combination of the drop-down menu with preset head shapes and the plurality of images with different head shapes.

[00105] The third sub-window 403 provides means for the users to select their ear size. In one embodiment, the third sub-window 403 includes an input window for the user to input/type their ear size. In another embodiment, the third sub-window 403 may contain a drop-down menu/list with preset ear sizes, e.g. outer ear size of about 4.5cm to 7.5cm, wherein the preset sizes are selectable by the user. In yet another embodiment, the third sub-window 403 includes a plurality of images illustrating the different ear sizes, e.g. a first image with a range of 4.5cm-5.0cm, a second image with a range of 5.1cm-5.5cm, a third image with a range of 5.6cm-6.0cm, etc. The users select the image with the closest range to their ear size. Alternatively, the third sub-window 403 may be a combination of the input window, the drop-down menu, or the plurality of images.

[00106] The fourth sub-window 404 provides means for the users to select their ear shape. In one embodiment, the fourth sub-window 404 includes a plurality of images illustrating the different ear shapes to allow the users to select their ear shape that is closest to one of the images. FIG. 5B shows the fourth sub-window 404 with a plurality of images representing common ear shapes.

[00107] According to various embodiments, further adjustments may be made by the user to the positioning of the virtual surround sound speaker locations, to personalize his or her listening experience. This method may enable the user to more fully realize the surround sound spatiality by making adjustments to the virtual speaker locations in the graphical user interface, which may be translated into adjustments to the HRTF coefficients for each speaker position. The results of such a graphical method of calibration may be immediately apparent to the user, as he or she may perceive the changes in virtual speaker positions during the calibration steps.

[00108] In one embodiment, the method may begin by instructing the user to place his or her preferred headphones onto his or her head, while an audio clip is playing which cycles through the default virtual speaker locations. The user may hear what these default positions sound like given his or her morphological parameters such as head size and ear shape, as well as the mechanical design and other characteristics of his or her preferred headphones.

[00109] FIG. 6A shows a screen shot 601 of a graphical user interface for calibrating virtual speaker positions in accordance with an embodiment. The user may select one of the virtual speaker locations 611 to 617 and may be presented with a respective marker highlighting the position of the audio at the default angle (azimuth) relative to the head/user representation 670 at the center of the sound field.

[00110] FIG. 6B shows a screen shot 602 of a graphical user interface in accordance with an embodiment, wherein a virtual speaker location marker 626 is shown when the virtual speaker location 616 is selected. The user may choose to adjust the position of this marker 626, which is associated only with virtual speaker location 616; this may result in adjustments made to the HRTF coefficients applied by the filter, in order to shift the perceived audio origination point (from the virtual speaker location 616) around the sound field. For example, by positioning the virtual surround sound speaker locations, the user may "freely" shift/move the 7 virtual speaker locations 611 to 617 within the geometric shape 650.

[00111] By repeating this process of selecting a speaker location and then adjusting the point of origin of the sounds played through the filter and originating at this point, the user may fully customize the sound field to his or her preference. According to various embodiments, this may enable the user to achieve a subjectively better virtual surround sound audio experience through real-time adjustments of the speaker positions with a synthesized multi-channel surround sound audio source playing through the HRTF filter being modified.

[00112] When the user has completed making all desired changes to the virtual speaker locations, the new HRTF coefficients may be saved for that particular user and optionally associated with his or her preferred headphones. Other HRTF calibrations may be performed for other headphones, enabling the user to customize and calibrate his or her HRTF filter library for a plurality of headphone or headset devices.

[00113] By incorporating a graphical user interface for the user to select morphological parameters, and adjust the virtual speaker locations used to determine the HRTF coefficients used in the synthesis of multi-channel surround sound audio from a stereo audio signal, users may overcome the standard "one size fits all" approach of HRTF filters and may be provided with a calibrated virtual surround sound experience. According to various embodiments, it may be ensured that, while subjective, the filters used to synthesize the virtual surround sound environment may be tuned for the user's particular desires of the filter as well as his or her preferred headphone or headset type.

[00114] Further, by saving these settings for the user both locally on the device or system and mirroring those settings via cloud services, the appropriate HRTF filter coefficients may be applied to a wide variety of applications, and may persist across multiple devices and systems used by the user. This may ensure the best possible virtual surround sound experience no matter which system the user is currently using.

[00115] The current state of HRTF calibration may be limited to a standard set of predetermined filters, created based on objective morphological factors, and may not present the user with affordances for calibrating their virtual surround sound experience using a graphical user interface to select their morphological parameters and control the positioning of virtual surround sound speaker positions.

[00116] Further, according to various embodiments, the state of the art may be advanced by associating a unique profile created by the user for their device(s) and system(s) with their unique identification in the cloud service, enabling a consistent experience across multiple devices through connectivity of the HRTF calibration software to the cloud service.

[00117] FIG. 7 shows a screen shot 700 of a graphical user interface or application program window in accordance with an embodiment, wherein an audio output device may be set. The application program window 700 is generated by the application program operating on a computing device. The application program is connected to a remote server over a network (e.g. an Internet cloud service), wherein the application program stores user profiles on the remote server or receives stored user profiles from the remote server. When the user logs in to the application program (e.g. with a unique user identification (ID) and password), the application program retrieves the stored user profiles associated with the user ID from the remote server and displays them on a profile sub-window 720 of the application program window 700. The profile sub-window 720 displays a list of user profiles and also enables new profiles to be created, wherein the list of user profiles is retrievable from the remote server or from the local client (i.e. the computing device). When a particular user profile is selected (e.g. "Profile" shown in FIG. 7), an audio device sub-window 730 displays a list of audio devices associated with the user profile. For example, all Razer analog headphones and headsets may be provided in the list of audio devices and one of them may be selected for calibration. When a new headphone/headset is connected to the computing device, the application program displays the name of the new headphone/headset in the audio device sub-window 730 if it is compatible with the application program. In one embodiment, the application program window 700 comprises a top menu bar 710, including the functions "SETTINGS", "AUDIO", "EQ" and "CALIBRATION". FIG. 7 shows the application program window 700 when the "SETTINGS" function 711 is selected by the user.

[00118] FIG. 8 shows a screen shot 800 of a graphical user interface or application program window in accordance with an embodiment, wherein general audio output parameters may be set. When the "AUDIO" function 712 is selected from the top menu bar 710, the application program window 800 displays an audio output sub-window 810 as well as the profile sub-window 720. The audio output sub-window 810 enables the user to adjust audio output parameters, such as but not limited to "BASS BOOST", "VOICE CLARITY", "VOLUME NORMALIZATION" and "VOLUME LEVEL". When the adjustment of the audio output parameters is complete, the application program associates or stores the data of the desired audio output parameters to the selected profile in the profile sub-window 720. The profile and the associated audio output parameters can then be stored on the remote server over the network so that they are retrievable by the user when he or she subsequently logs in to the application program using the same user ID and password.

[00119] FIG. 9 shows a screen shot 900 of a graphical user interface or application program window in accordance with an embodiment, wherein equalizer (EQ) parameters may be set. When the "EQ" function 713 is selected from the top menu bar 710, the application program window 900 displays a drop-down menu with preset EQ settings 910, wherein the drop-down menu may include common EQ settings such as but not limited to "Classical", "Rock", "Dance", "Jazz", etc. Alternatively, the application program window 900 includes a plurality of EQ frequency bars that enables the user to configure the desired EQ settings. Similarly, the application program window 900 includes the profile sub-window 720. When the adjustment of the EQ settings is complete, the application program associates or stores the data of the desired EQ settings to the selected profile in the profile sub-window 720. The profile and the associated EQ settings can then be stored on the remote server over the network so that they are retrievable by the user when he or she subsequently logs in to the application program using the same user ID and password.

[00120] FIG. 10 shows a screen shot 1000 of a graphical user interface or application program window in accordance with an embodiment, wherein the position of a virtual speaker may be adjusted. When the "CALIBRATION" function 714 is selected from the top menu bar 710, the application program window 1000 displays a representation of a plurality of speakers 1101-1107 arranged on a circular path 1002. A representation of a user 1001 is located at the central position of the circular path 1002. By analogy to protractor measurements, the default position of the speaker 1101 is about 0 degrees from the user 1001. For the rest, speaker 1102 is positioned around 45 degrees, speaker 1103 around 90 degrees, speaker 1104 around 135 degrees, speaker 1105 around 225 degrees, speaker 1106 around 270 degrees and speaker 1107 around 325 degrees relative to the user 1001, respectively. The 7 speakers 1101-1107 represent a 7.1 surround system, but it can be appreciated that for other surround systems the number of speakers may vary, e.g. 5 speakers may be used to represent a 5.1 surround system. Furthermore, it can be appreciated that the circular path 1002 may take other forms/shapes, such as a square path or a rectangular path.
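For reference, the default layout described above can be captured in a small mapping; this is a sketch reflecting the azimuths stated in this paragraph, and the dictionary itself is not part of the patent.

# Default azimuths, in degrees clockwise from the user's front, for the seven
# virtual speakers 1101-1107 as described above.
DEFAULT_AZIMUTHS = {
    1101: 0, 1102: 45, 1103: 90, 1104: 135,
    1105: 225, 1106: 270, 1107: 325,
}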

[00121] When the user first clicks on the "CALIBRATION" function 714 on the top menu bar 710 to open the screen shown in FIG. 10, a surround sound audio loop may be playing, such as a helicopter, which may move around all the virtual speaker positions. At any time, the user may click on a "Test All" button 1201 to replay this surround sound audio loop and listen to all the speaker positions with any changes made. When the user clicks on a virtual speaker location (for example speaker 1102), the other speakers may fade away and the selected speaker may be highlighted, as shown in FIG. 11. An audio loop may play from the selected virtual speaker location.

[00122] FIG. 11 shows a screen shot 1100 of a graphical user interface in accordance with an embodiment, wherein a marker 1122 indicating an angle for the chosen virtual speaker 1102 may be set. The user may click the calibration marker 1122 (with the shape of a ball) and may drag it around the circular path 1002 to varying degrees based on the speaker 1102 selected, to adjust the position of the sound until it appears to originate from the virtual speaker location, or their desired location. In one embodiment, the virtual speaker location 1102 is not moveable. The calibration marker 1122 may not end up directly on top of the speaker 1102 - it may simply be an offset of the sound to account for ear and head size, headphone type, for example. In an alternative embodiment, the user interface does not include the calibration marker 1122 and the user may adjust the position of the sound from the speaker 1102 by clicking and dragging the speaker 1102 around the circular path 1002.

[00123] The adjustment of the calibration marker 1122 that is associated only with speaker 1102 results in adjustments to the HRTF coefficient associated with speaker 1102. By repeating this process of selecting a speaker and then adjusting the point of origin of the sounds played, the user may fully customize the sound field to his or her preference. When the user has completed making all desired changes to the virtual speaker locations, the new HRTF coefficients are stored with the desired profile and associated with his or her preferred headphones selected in the audio device sub-window 730 shown in FIG. 7. Other HRTF calibrations may be performed for other headphones, enabling the user to customize and calibrate his or her HRTF filter library for a plurality of headphone or headset devices.

[00124] The calibration marker or speaker may be moved only to a certain degree, so as for example not to move a right speaker entirely to a left side or to move a front speaker to the rear. This helps users to maintain the audio fidelity while calibrating their headphone/headset. The azimuth for each speaker 1101-1107 may be restricted such that the user may move a selected speaker about +/-15 to 20 degrees from its default/original position. For example, as shown in the screen shot 1200 in FIG. 12, the range of angles of azimuth of speaker 1102 may be fixed to a corresponding zone/region 1132 so that the speaker 1102 (or its associated marker) is restricted from being moved outside the zone 1132. In other words, the speaker 1102 (or its associated marker) is only slidable around the circular path 1002 within the zone 1132, or moveable about +/- 20 degrees from its default angle of 45 degrees (i.e. from 25 degrees to 65 degrees).
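Restricting each speaker (or its marker) to its zone amounts to clamping the requested azimuth to a window around the default. A minimal sketch, assuming a symmetric +/- 20 degree window; the function name is an assumption for illustration.

def clamp_azimuth(requested_deg: float, default_deg: float,
                  max_offset_deg: float = 20.0) -> float:
    # Keep the dragged position inside the zone around the default azimuth,
    # e.g. 25-65 degrees for a speaker whose default is 45 degrees.
    low, high = default_deg - max_offset_deg, default_deg + max_offset_deg
    return min(max(requested_deg, low), high)

print(clamp_azimuth(80.0, 45.0))  # 65.0: cannot leave the 25-65 degree zone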

[00125] The application program determines a parameter for an HRTF based on the angle of the speaker 1102 (or its associated calibration marker). The application program then modifies an audio signal in accordance with the HRTF based on the determined parameter, resulting in the output of the modified audio signal from the speaker 1102 to the user. In one embodiment, the application program may be configured to determine a parameter for the HRTF based on the input indication of the angle using a lookup table storing a relation between angles and parameters for the HRTF. When the adjustment of the speakers 1101-1107 is complete, the application program associates or stores the parameter of the HRTF to the selected profile in the profile sub-window 720. The profile and the parameters of the HRTF can then be stored on the remote server over the network so that they are retrievable by the user when he or she subsequently logs in to the application program using the same user ID and password.

[00126] In one embodiment, the application program includes an interface for the user to select the size and shape of his/her head and ear (similar to FIG. 5A as discussed above) prior to the calibration of the speakers 1101-1107 shown in FIGS. 10 and 11. The application program determines the parameter for the HRTF based on the received input indicating the size and shape of the head and ear. The application program then modifies an audio signal in accordance with the HRTF based on the determined parameter, resulting in the output of the modified audio signal from the speakers 1101-1107 to the user. In one embodiment, the application program may be configured to determine a parameter for the HRTF based on the input indication of the head shape, head size, ear shape or ear size using a lookup table storing a relation between the head shape, head size, ear shape, ear size and parameters for the HRTF. When the input of the size and shape of the head and ear is complete, the application program associates or stores the parameter of the HRTF to the selected profile in the profile sub-window 720. The profile and the parameters of the HRTF can then be stored on the remote server over the network so that they are retrievable by the user when he or she subsequently logs in to the application program using the same user ID and password.
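The morphology-based lookup can be pictured as a table keyed by the four selections. This is a sketch with invented keys and values, since the patent does not specify the table's layout; the names and numbers below are illustrative only.

# Hypothetical table relating (head shape, head size, ear shape, ear size)
# to HRTF parameters; the entries below are illustrative only.
MORPHOLOGY_TABLE = {
    ("round", "medium", "shape-3", "medium"): [0.92, 0.41, 0.18],
    ("oval",  "large",  "shape-1", "small"):  [0.88, 0.47, 0.22],
}

def parameters_for_morphology(head_shape, head_size, ear_shape, ear_size):
    return MORPHOLOGY_TABLE.get((head_shape, head_size, ear_shape, ear_size))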

[00127] In one embodiment, the application program window comprises a "Reset" button to allow the user to reset the speakers 1101-1107 to their default positions.

[00128] In one embodiment, a checkbox may be provided, and checking the checkbox may override calibration settings saved to the desired profile, e.g. "Profile" shown in the profile sub-window 720 in FIG. 12, and apply the calibration settings globally to all profiles associated with the unique user ID.

[00129] FIG. 13 shows a screen shot 1300 of a graphical user interface or application program window showing an alternative representation of the speakers. In FIG. 13, the speakers are represented by an "arrow-head" image instead of the loudspeaker image illustrated in FIG. 10. Similarly, the "arrow-head" speakers 1311-1317 are moveable on a circular path 1301. Furthermore, the application program window includes another circular path 1302 concentric to the circular path 1301. The circular path 1302 includes a plurality of calibration markers positioned adjacent to the speakers 1311-1317 in their default positions. When a speaker is selected (e.g. speaker 1312), the rest of the speakers fade away as shown in the application program window 1400 in FIG. 14. Similarly, the speaker 1312 includes a corresponding calibration marker 1322 that is moveable within a zone/region 1332 of the circular path 1302.

[00130] While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.