

Title:
A METHOD OF PROVIDING TO USER AN INTERACTIVE MUSIC COMPOSITION
Document Type and Number:
WIPO Patent Application WO/2019/002909
Kind Code:
A1
Abstract:
The invention provides the following: a method of providing an interactive music composition to a user, and a method of providing a computer game to play blindfold. The inventions are based on the ability of human binaural hearing and on the possibility of delivering 3D sound to the user's headphones from objects in a virtual space. The user is immersed in a virtual space represented by sound objects. Using the user's position and orientation in the virtual space and the position of every sound object in it, it is possible to calculate and deliver to the user's left and right ears a signal that the user will perceive as 3D sound. With such 3D sound the user is able to localize the position of a sound source inside the virtual space and interact with the sound object even blindfold. Placing multiple unmixed music tracks in the virtual space as sound sources creates possibilities for interactive listening to the music composition.

Inventors:
LATYPOV RAY (US)
Application Number:
PCT/IB2017/053803
Publication Date:
January 03, 2019
Filing Date:
June 26, 2017
Assignee:
LATYPOV RAY (US)
International Classes:
H04S7/00
Foreign References:
US20170038943A12017-02-09
US9584912B22017-02-28
US8805561B22014-08-12
US20170041730A12017-02-09
Claims:
We claim:

1. A method of providing an interactive music composition to a user, said music composition comprising records from multiple sound sources from a group consisting of soundtracks of vocals and soundtracks of musical instruments, the method comprising the steps of: specifying initial coordinates of each of said sound sources in a virtual space; determining a position and orientation of said sound sources; determining an initial position and orientation of the user (avatar) in the virtual space; activating playing of the music composition while providing the user the ability to change the user's position and orientation in the virtual space; and, as the user changes position and orientation in the virtual space while listening to the music composition, calculating the sound volume for each of the user's ears and providing the sound from each of the plurality of sound sources to each of the user's ears according to the current coordinates of the user in the virtual space with respect to each sound source in real time.

2. The method according to claim 1, further comprising determining the orientation of the user in the virtual space in accordance with the orientation of the user in a real space.

3. The method according to claim 1, further comprising determining the position and orientation of the user in the virtual space in accordance with the orientation and position of the user in a real space.

4. The method according to claim 1 or 2, wherein the changing of the position and orientation of the user in the virtual space is performed by an interface from the group consisting of a touch screen, a joystick, a mouse, an additional gadget, and position and orientation sensors.

5. The method according to claim 1, further comprising altering the position and orientation of the sound sources in the virtual space.

6. A method of providing a computer game to play blindfold, comprising the steps of: activating an application which forms a model of a virtual space formed by sound objects that represent sound sources; immersing the user into the virtual space, the user being provided with stereo headphones mounted on the user's head; determining the position and orientation of the user in the virtual space; using data on the position and orientation of the user in the virtual space with respect to said sound sources in real time; calculating sound parameters of each of said sound sources in the virtual space for each of the user's ears; and providing the sound to the left and right earphones for the left and right ears of the user, whereby the user is able to navigate in space relative to the sound sources, through binaural hearing, in order to interact with the objects of the virtual space.

7. The method according to claim 6, further comprising determining the orientation of the user in the virtual space in accordance with the user's head orientation in a real space.

8. The method according to claim 6 or 7, wherein the user's head orientation is determined using an orientation sensor of a gadget held in the hands of the user, the gadget rotating synchronously with the user's head turns.

9. The method according to claim 8, wherein a smartphone having an orientation sensor is used as the gadget.

10. The method according to claim 6, wherein the sound objects are selected from the group consisting of continuously murmuring, tinkling, buzzing and singing objects (bees, wasps, bumblebees, flies, gadflies, chirping gnats, mosquitoes, animated music players, singing objects, multi-copters and drones).

Description:
A METHOD OF PROVIDING TO USER AN INTERACTIVE MUSIC COMPOSITION

Field of Art

3D sound is dimensionally correct sound delivered to stereo headphones that allows the user to localize a sound source in the virtual space, i.e., to define the location of the sound source intuitively.

Binaural hearing has special capabilities which are not fully used in modern applications such as computer games and the recording and listening of music. Even if some games support 3D sound in part, the effect is minimal because, as a rule, computers and game consoles use permanently located speakers or a Dolby Surround system, and nothing depends on the user's turns relative to the sound sources. Stereo headphones likewise give no proper effect, since the sound does not change with head rotations. The key to 3D sound is using head-orientation data: knowing the orientation of the user's head in space, it is possible to correctly reproduce, for each of the user's two ears, the sound from a source positioned in the virtual space.

The sound from a source depends on the position and orientation of the user's head relative to that source. For example, the closer the sound source is to the ear, the louder the sound. The bigger the difference in distances from the sound source to the two ears, the bigger the time delay of arrival of the sound wave front at the farther ear. Besides the difference in sound level caused by the difference in distance, the sound volume decreases strongly for an ear "shaded" by the head, which must be taken into account when determining the correct sound level for each ear. This shading acts differently depending on the sound frequency and the shapes of the head and ears.

For example, the time-delay component is an important part of binaural hearing, but some sound engines, such as DirectSound, OpenGL libraries and others, do not support it properly. Even where some features of binaural sound are partially implemented in sound engines and libraries, their use is pointless without orientation data for the user's head, since neither stereo speakers nor even a surround system can provide precise positioning of a sound source for all directions. The spatial separation of the two hearing receivers (the external ears), together with the screening effect of the head and body and diffraction effects, leads to a significant difference between the signals arriving at the right and left ears; this enables localization of a sound source in space, conditioned by three physical factors: a) time (Interaural Time Difference, ITD), resulting from the difference in arrival time of the same sound phases at the left and right ears; b) intensity (Interaural Intensity Difference, IID), resulting from different intensities of the sound wave because of its diffraction around the head and the formation of an "acoustic shadow" on the side opposite the sound source, described by a head-related transfer function (HRTF); c) spectrum, resulting from differences in the sound spectrum received by the left and right ears because of the different screening effect of the head and external ears on the low- and high-frequency components of a complex sound.
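The two interaural cues above can be sketched numerically. The following Python snippet uses Woodworth's spherical-head approximation for the ITD and a deliberately crude sine-based stand-in for the IID; the head radius and the 20 dB shadowing ceiling are illustrative assumptions, not values from this application (a real IID is frequency dependent and given by an HRTF):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
HEAD_RADIUS = 0.0875    # m, an assumed average adult head radius

def interaural_time_difference(azimuth_rad):
    """Woodworth's approximation of ITD for a spherical head.

    azimuth_rad: source direction relative to straight ahead;
    the result is the extra travel time to the far ear, in seconds.
    """
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))

def interaural_level_difference(azimuth_rad, max_shadow_db=20.0):
    """Crude IID model: level loss at the far ear grows with |sin(azimuth)|.

    max_shadow_db is an assumed upper bound on head shadowing;
    real HRTF-based shadowing varies strongly with frequency.
    """
    return max_shadow_db * abs(math.sin(azimuth_rad))
```

For a source straight ahead both cues are zero; for a source at 90 degrees the ITD comes out around 0.65 ms, consistent with the commonly cited human maximum.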

Background

A sound can come from numerous sources: a voice, music, speech, a song, animals, insects, natural phenomena, etc. A sound has numerous properties: pitch (frequency), volume, directional properties, speed of propagation, attenuation. A real sound wave is not plane but spherical, and the intensity of a spherical wave decreases in inverse proportion to the squared distance. When calculating the volume for the user's ears it must be taken into account that as the source approaches infinitely near, the sound becomes maximal. This maximum must be limited to a safe threshold to prevent hurting the user's hearing. If, for example, the sound source in a game is an explosion, quadratic attenuation keeps it below the threshold at a distance even at raised volume. But if the explosion in the virtual space is near the user's ear, it is necessary to transmit not the nominal calculated value, when it exceeds the threshold, but the threshold value. This logic can and should be built into sound engines to ensure the safety of the user's hearing and health.
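The inverse-square attenuation with a safety ceiling described above might look like this inside a sound engine; `min_distance` and `max_gain` are assumed illustrative constants, not values specified in the application:

```python
def safe_gain(source_power, distance, min_distance=0.1, max_gain=1.0):
    """Inverse-square attenuation with a safety ceiling.

    Intensity falls off as 1/d^2. Below min_distance the nominal
    value would exceed the safe threshold, so the returned gain is
    clamped to max_gain to protect the listener's hearing.
    """
    d = max(distance, min_distance)   # avoid the near-field blow-up
    gain = source_power / (d * d)     # spherical-wave attenuation
    return min(gain, max_gain)        # enforce the safe threshold
```

An explosion at 10 m thus yields a small gain, while the same explosion at the ear is clipped to the ceiling rather than transmitted at its nominal value.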

Sound perception depends on the sensitivity of the receiver: hearing has minimal and maximal thresholds of perception and frequency-dependent sensitivity. Most animals, including human beings, have binaural hearing: two ears (sound detectors) mutually spaced and generally oriented differently. Many animals are able to turn their ears and external ears in the right direction. Depending on how soon a wave front arrives at the detector (an ear, a microphone) and how loud the sound is, the listener can determine its location (distance and direction) intuitively. The user (listener) perceives the spatial location of a sound source automatically and subconsciously, by innate qualities and life experience. On one hand this is an objective process laid down instinctively by animal physiology; on the other hand it heavily depends on individual peculiarities of perception, the shape of the external ears, sensitivity and background experience. For example, a man who has already heard the buzzing of a bumblebee and identified its spatial location will be able to imagine fairly exactly where it is relative to him on hearing it again. If a man does not know the "standard" volume of a sound source, it will be difficult for him to determine the distance to it exactly, even though he can fairly exactly determine the direction the sound comes from. In a 3D sound application we therefore first need to give sound examples at their standard volume and show what produces them and how the sound changes over distance and time. The arrival of sound reflections at the ears also affects the perception process.

Sometimes, in the corridors of a building, in a city with buildings, or in a forest, the user hears the reflected sound signal louder, especially when the source is closed off from the line of sight by some obstacle. The user can come to a conclusion about the real source position intuitively or logically; this can be critical for training military and police personnel. Reflection, diffraction and interference can also be programmed into the sound engines of computer applications for plausible sound reproduction in the virtual space.

Summary of Invention

Each sound source has its 6-DOF coordinates. 6-DOF means six measurements: three linear coordinates (e.g., orthogonal coordinates X, Y, Z) and three coordinates which define orientation (e.g., Euler angles). The same orientation can be represented in different ways, e.g. uniquely described with a quaternion (four components). Each user likewise has 6-DOF coordinates in the space. The coordinates of both the user and the sound sources in the virtual space can be described with 6-DOF coordinates or otherwise, and they can vary with time. Modern technologies allow tracking the movements of hands and fingers in real time and thereby controlling objects in virtual or augmented reality. For a manipulated object it is advantageous to associate three-dimensional sound with it and change the sound depending on its position, orientation and manipulation by the hands: for example, clenching the hands to make an inflatable toy whistle, or hearing the purr of a virtual cat being stroked.
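A 6-DOF pose as described could be represented, for example, as follows; the field names and the choice of a unit quaternion for orientation are illustrative, since the text notes Euler angles would serve equally well:

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """6-DOF pose: three linear coordinates plus an orientation.

    Orientation is stored here as a unit quaternion (w, x, y, z);
    three Euler angles would be an equivalent representation.
    """
    x: float
    y: float
    z: float
    qw: float = 1.0  # identity orientation by default
    qx: float = 0.0
    qy: float = 0.0
    qz: float = 0.0

user = Pose6DOF(0.0, 0.0, 0.0)    # listener at the origin
vocal = Pose6DOF(2.0, 0.0, 1.5)   # a sound source ahead and above
```

Both the user and every sound source would carry such a pose, updated over time as they move.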

An application on a smartphone with three-dimensional sound can use different types of sensors for positioning, including GPS. For example, one user stands in the center of a stadium, marking that place as zero; another user can be at a stadium in another part of the world at the same time. Applications on the gadgets can be connected in one network via the Internet and exchange mutual relative data once their origin coordinates are aligned. If the task of one user is to catch the other in the same virtual space, they need not even be displayed to each other visually: an acoustic beacon is placed at each user's location. One user will hear where and on which side the other is and can move in that direction, while the other tries to get away. To catch means, for example, to come within a certain distance of the partner in the virtual space. It is like playing Marco Polo in virtual space, and almost the same as chasing one another in darkness in one real space, orienting by sound. Since the users' eyes need not be fixed on screens, they will be safe from collisions with objects and other people. The user can be fully immersed in a virtual space presented by the sound sources of virtual objects while only a small part of his vision attends to the small screen of the gadget. This allows attention to real-life obstacles and dangerous situations around, such as collisions with other people or a wall. It is even possible to put the gadget in a pocket and interact fully through the 3D sound of the application without visually monitoring the virtual environment, remaining fully immersed in the sound part of the virtual world.

An application can be complemented with monitoring of physical activity. It is possible to monitor movements in smaller spaces with sensors such as Kinect, and one can walk around a virtual space displayed in an HMD in a Virtusphere, which allows walking inside the virtual space in any direction for any distance. If one plays sitting in an armchair or standing in one place, it is possible to control the avatar's movement with a touchscreen or manipulators instead of physical movement. One or several users can chase a sound beacon in the virtual space to stimulate movement at a stadium. A "flying" MP3 player can be the beacon, playing the music the user would like to listen to. Trying to get closer to the flying virtual MP3 player, the user will walk or run a fairly large distance without noticing it and take the necessary exercise, controlled by the application. Data from portable activity monitors, both smartphones and specialized watches and wristbands, allows fine adjustment of the exercise.

The method enables the creation of pure sound games, without any image at all or without one for a period of time. It is possible to play by ear: one can listen to the sound of an object, determine its localization and, e.g., come closer to it, move away from it, or shoot at it, depending on the aims of the application.

According to the invention, to calculate the sound level from a source located at a definite place in the virtual space model, a sound engine has to use the sound source's directivity (where it makes sense), the coordinates and orientation of the user's head in the space, the dependence of the level on the distance from the source to each ear, the time delay of the sound's arrival at each ear, the "shading" of an ear by the head when the ear is not on the line of sight of the sound source (the head-related transfer function), sound diffraction, and the sound's spectral composition. Sound of different frequencies is shaded by the head differently and is perceived differently because of the curvature of the external ear.
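As a rough sketch of the per-ear calculation described above, the following Python function derives a distance gain and an arrival delay for each ear from the listener's position and heading in the plane; head shadowing, diffraction and spectral filtering (the HRTF proper) are deliberately omitted, and the ear offset and near-field clamp are assumed constants:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s
EAR_OFFSET = 0.0875     # m, assumed half interaural distance

def per_ear_signal(source_pos, head_pos, head_yaw):
    """Return {"left": (gain, delay_s), "right": (gain, delay_s)}.

    source_pos, head_pos: (x, y) in metres; head_yaw: heading in
    radians, with the facing direction (cos yaw, sin yaw). Each ear
    sits perpendicular to the facing direction; gain is 1/d^2 and
    delay is d / c for that ear's own distance d.
    """
    sx, sy = source_pos
    hx, hy = head_pos
    out = {}
    for name, sign in (("left", 1.0), ("right", -1.0)):
        # ear position offset 90 degrees from the facing direction
        ex = hx - sign * math.sin(head_yaw) * EAR_OFFSET
        ey = hy + sign * math.cos(head_yaw) * EAR_OFFSET
        d = max(math.hypot(sx - ex, sy - ey), 0.1)  # near-field clamp
        out[name] = (1.0 / (d * d), d / SPEED_OF_SOUND)
    return out
```

For a source on the listener's left, the left ear correctly receives a higher gain and an earlier wave front than the right ear, matching the two interaural cues discussed earlier.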

It is useful to apply means to determine the orientation of the user (the user's head) in real space and the corresponding orientation in the virtual space; this raises the accuracy of the sound levels transferred to each ear from a source in the virtual space and allows the user to determine the localization of the sound source in the virtual space more precisely. It should be noted that, according to the invention, applications that use three-dimensional sound can either be assisted by a 3D image of the environment and objects or be without visualization and perceived only aurally. If the user prefers to play without visualization of the virtual space and/or objects, the screen can still carry accompanying information such as game time, scores, virtual buttons and so on. It is useful to provide the user with binaural sound by modifying the original sound for the user's two ears: calculating the correctly graded volume for each ear and the time delay of the sound front, and applying a pitch-level filter to ensure natural sound perception and localization of the source in the virtual space.

A method for the creation of applications with a three-dimensional virtual space enables the user to determine the localization of sound sources in the virtual space naturally. For this purpose all available properties of hearing are used: its physiology and the features of sound propagation in space and around the user's head. Applications created according to the invention allow the user to determine the localization of a sound source aurally and naturally, the way a man does from birth and through experience. One possibility is to use modern smartphones and tablets with rotation sensors. If stereo headphones are connected to them and the correctly calculated sound is transmitted according to the invention, the rotation sensor of the smartphone can be used instead of a sensor on the head: when the user holds the phone in hand he usually looks perpendicularly at the display, which means his head and the smartphone rotate synchronously. If the smartphone is worn on the head, the two are rigidly connected, in accordance with this invention.

A method for the creation and functioning of a computer application in which 3D sound plays the leading part. For a successful run of the application the user should continually localize the sound sources in the simulated 3D space; the space itself may be displayed or not.

The present invention describes a method of using multi-channel audio for interactive listening to music by a user. The method represents sound sources spatially in a virtual space, with the ability for the user to move in this space and listen to these sources interactively as three-dimensional sound. It is possible to change the settings of the sound sources in the space, and the space can be animated. The user in stereo headphones will be able to distinguish the location of the sources in the space by means of binaural hearing, with the possibility of changing his orientation and position relative to these sound sources. With binaural hearing a man is able to determine where a sound source is not only in azimuth but also to estimate whether it comes from above or below: it is enough to incline the head to the right or to the left to understand exactly what height the sound source is at, even if the source is invisible.

For historical reasons sounds, including music and songs (soundtracks), are recorded from points stationary in relation to the artists. Even if several sound channels are recorded, e.g. several singers or separate musical instruments, all the channels are mixed down for static reproduction into two stereo channels for headphones or speakers, or into a Dolby standard in the more advanced case. The user has limited possibilities of interactivity, such as volume change, sometimes balance change between channels, or change of frequency background and tone on advanced devices. The user cannot turn off any of the sources in a mixed composition at his own free choice or change the volume separately for one of the sound sources, because all these sound channels have already been converted into a static work, e.g. on CD or in MP3 format. Although this work is done by talented and experienced sound engineers, and users can enjoy their variant of the representation, they cannot listen more attentively to a particular sound source at their own wish, e.g. to a singer or a guitar, when they would like to. Modern microprocessor capabilities and the stated method allow listening to pre-recorded music in a new interactive way, if the soundtracks of separately recorded, unmixed music channels are preserved in archives, or if new music is saved in a multichannel variant, which is the perfect choice for the described method.

The object at the base of the present invention is to create a method of providing an interactive music composition to a user, in which the user is able to listen to said music composition interactively, with the possibility of listening to the details of each vocal or instrument as the user wants, through an interface as simple as navigation inside a regular computer game. Another object at the base of the present invention is to create a method of providing a computer game to play blindfold, in which the user is able to play said computer game by reacting to 3D sound from objects inside the virtual space, using the binaural features of human hearing and the ability to localize the sound source or sources.

The stated object is attained in a method of providing an interactive music composition to a user, which consists of: said music composition comprising records from multiple sound sources from the group consisting of soundtracks of vocals and soundtracks of musical instruments; specifying initial coordinates of each of the sound sources in a virtual space; determining a position and orientation of said sound sources; determining an initial position and orientation of the user (avatar) in the virtual space; activating playing of the music composition while providing the user the ability to change position and orientation in the virtual space; and, as the user changes position and orientation in the virtual space while listening to the music composition, calculating the sound volume for each of the user's ears and providing the sound from each of the plurality of sound sources to each of the user's ears according to the current coordinates of the user in the virtual space with respect to each sound source in real time. It is useful that the orientation of the user in the virtual space is further determined in accordance with the orientation of the user in a real space.

It is advantageous that the position and orientation of the user in the virtual space are determined in accordance with the user's orientation and position in a real space.

It is preferable that the changing of the position and orientation of the user in the virtual space is performed by an interface from the group consisting of a touch screen, a joystick, a mouse, an additional gadget, and position and orientation sensors.

It is useful that the position and orientation of the sound sources are altered in the virtual space.

It is advantageous that the user navigates inside the music composition environment blindfold, reacting to the sound the user hears in the headphones.

The aforesaid object is also attained in a method of providing a computer game to play blindfold, which consists of: activating an application which forms a model of a virtual space formed by sound objects that represent sound sources; immersing the user into the virtual space, the user being provided with stereo headphones mounted on the user's head; determining the position and orientation of the user in the virtual space; using data on the position and orientation of the user in the virtual space with respect to said sound sources in real time; calculating sound parameters of each of said sound sources in the virtual space for each of the user's ears; and providing the sound to the left and right earphones for the left and right ears of the user, whereby the user is able to navigate in space relative to the sound sources, through binaural hearing, in order to interact with the objects of the virtual space. It is useful that the orientation of the user in the virtual space is determined in accordance with the user's head orientation in a real space.

It is advantageous that the user's head orientation is determined using an orientation sensor of a gadget held in the hands of the user, the gadget rotating synchronously with the user's head turns.

It is preferable that a smartphone having an orientation sensor is used as the gadget.

It is advantageous that the sound objects are selected from the group consisting of continuously murmuring, tinkling, buzzing and singing objects (bees, wasps, bumblebees, flies, gadflies, chirping gnats, mosquitoes, animated music players, singing objects, multi-copters and drones).

It is useful to show on the screen of the gadget working information such as scores, virtual buttons, and other interface elements such as arrows, even without showing the virtual space or objects.

Embodiment of the Invention

The most preferred application based on the method according to the invention is an application for smartphones using unmixed tracks (stems) of a music composition. The music tracks are placed as sources with their coordinates in the virtual space of the application. A user with a smartphone and headphones has the ability to immerse fully in at least the virtual sound space of the application. For each ear, the application calculates the sound value from each sound source according to the user's coordinates in the virtual space. This ensures the perception of three-dimensional sound in space; that is, it provides complete immersion of the user in the virtual sound space, even without full visual immersion in the same space. In our opinion, this is an advantage: the user will be able to see both the virtual space on the smartphone screen and the real space around him. This ensures greater user safety compared to immersion with virtual glasses; when fully immersed with glasses, the user loses the ability to see real space and can run into real obstacles or fall on stairs. The user moving in the virtual space, for example using the touch screen, can naturally rotate in space, combining different interfaces. Turning in space with the smartphone, the user's orientation is tracked by the phone's gyroscopic sensors, and the position and orientation of the user in the virtual application space change accordingly. That is, in accordance with the user's actions, the sounds of the reproduced music change interactively. Running the application another time and moving along a different route, the user will hear the music quite differently. The user can interactively change the perception of the music in accordance with his mood or goals.

The invention provides the possibility of full immersion in the 3D sound of a virtual environment with incomplete visual immersion. This ensures the safety of the user: the user will see the environment and will not fall down the stairs or run into an obstacle.

Most of the above-described applications with three-dimensional sound and their interfaces can be successfully complemented by the user's voice commands; it is useful to have a voice recognition tool in the application. For example, a user says "Take the object", "Run", or "BANG! BANG!" for shooting. A user who holds a smartphone in front of him, wears it on his head, or wears headphones with a microphone can shoot by voice, move, and even turn around the virtual space.

BRIEF DESCRIPTION OF THE DRAWINGS

Further on, the invention will be explained by concrete embodiments with reference to the accompanying drawings, in which:

FIG.1 is a view illustrating the user's right ear shadowed from the sound source.

FIG.2 is a view illustrating that both ears can hear the sound source, but differently.

FIG.3 is a view illustrating a headphone with an orientation sensor connected to a gadget.

FIG.4 is a view illustrating a headphone integrated with the gadget.

FIG.5 is a view illustrating a headphone without an orientation sensor, the orientation sensor being in the gadget.

FIG.6 is a view illustrating the traditional method of creating and providing a music composition.

FIG.7 is a view illustrating a disposition of music tracks as sound sources inside the virtual space and three different positions and orientations of the user in the virtual space.

FIG.8 is a view illustrating another disposition of the band and the user's route.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE INVENTION

The present invention will now be described more fully with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. The invention may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the invention to those skilled in the art.

FIG.1 is a view from above of a user 1 with a right ear 2 and a left ear 3, and a sound source 4. The sound can come freely to the user's left ear, but in this position and orientation the right ear is shadowed by the user's head. The sound volume at the right ear will be much lower than at the left ear, or absent. Moreover, the attenuated signal will differ from the left ear's not only in volume level but also in frequency characteristics: lower-frequency components can reach the shadowed ear by diffraction, but high-frequency components cannot. Also, the wave front of the sound signal will arrive at the left ear earlier than at the right ear.

FIG.2 shows a different disposition of the user's head relative to the sound source than in FIG.1. Both ears will hear the sound signal, but the sound volume at the left ear will be higher than at the right ear. Also, the wave front of the sound signal will arrive at the left ear earlier than at the right ear.

FIG.3 is a view illustrating a user 1 with a headphone 5. The headphone is integrated with an orientation sensor 6 connected to a gadget 7. It is preferable to detect the orientation of the user's head for proper calculation of 3D sound relative to the sound source in the virtual space, using a head-related transfer function (HRTF). An orientation sensor located on the head is the best option for determining the user's orientation.

FIG.4 is a view illustrating a headphone 5 integrated with the gadget 8. Such a device could include a GPS sensor, an orientation sensor, a microprocessor for 3D sound calculation, and other means. The application and the gadget can be controlled by verbal commands as well.

FIG.5 is a view illustrating a headphone 9 without an orientation sensor; the orientation sensor is in the gadget 10. Preferably the user holds the gadget with two hands so as to turn synchronously with it. In this case the orientation data from the gadget's orientation sensor can be used as the orientation of the user. As a rule the orientations of the gadget and of the head coincide when the user keeps looking at the gadget's screen perpendicularly, which happens fairly intuitively. This means that the orientation of the gadget is relatively constant with respect to the user's orientation while the user uses the gadget and looks at its screen. In such cases the application can use the gadget's orientation as the user's head orientation, taking their mutual disposition into consideration.

FIG.6 is a view illustrating the traditional method of creating and providing a music composition. There are six sound tracks, for example: vocal 11, rhythm guitar 12, bass guitar 13, keyboard 14, percussion instruments (drums) 15 and guitar 16. As a rule, all tracks are recorded separately in a studio. Then a sound engineer in the studio 17 edits (brings together, mixes) two channels of a stereo record 18 for users (listeners). This mixed music composition can be written and distributed on some type of media. All users hear the music composition as it was edited by the sound engineer in the studio, regardless of the medium: vinyl, cassette, CD or MP3. For all users it will always be the same music composition. As a rule, all the interactivity available to users is balancing between the right and left channels and sometimes simply changing the volume. There is no big difference between stereo, quadro or Dolby Surround: all of them are records fixed forever.
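The fixed studio mixdown of FIG.6 can be contrasted with the interactive method in a few lines of Python. Here each track's pan is chosen once by the engineer, so every listener receives the same two channels forever; the function and its pan convention are an illustrative sketch, not the studio process itself:

```python
def static_mixdown(stems, pans):
    """Traditional fixed stereo mixdown of one audio frame.

    stems: current sample of each track; pans: per-track pan in
    [-1, 1] chosen once by the engineer (+1 = fully left,
    -1 = fully right). Every listener gets the same result.
    """
    left = sum(s * (1.0 + p) / 2.0 for s, p in zip(stems, pans))
    right = sum(s * (1.0 - p) / 2.0 for s, p in zip(stems, pans))
    return left, right
```

Once such a frame is written to CD or MP3, the individual stems are gone and no listener-side choice remains, which is exactly the limitation the invention addresses.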

FIG.7 is a view illustrating three different positions and orientations of the user-listener in the virtual space. The method according to the invention proposes a solution for interactive listening to a music composition. There is no need to bring several soundtracks together into a fixed stereo mix. The soundtrack sources 11-16 are placed into the virtual space, each with its own coordinates. The user is immersed into the virtual space. All soundtracks are activated and played inside the virtual space. The user 1 is able to move in the virtual space and hear the music composition interactively. A sound engine calculates and provides, in real time, the sound volume for the left and right ears of the user from each sound source. The calculation should take into consideration the HRTF, the user's position and orientation data, and the coordinates of the sound sources. FIG.7 illustrates three different user positions and orientations 19, 20, and 21 relative to the positions of the soundtrack sources. For example, position and orientation 19 of the user allows hearing the singer (vocal 11) together with the rhythm guitar 12 at good volume while hearing the percussion instruments 15 at good volume from the left. The user will hear all other instruments at lower volume, as a background. Position and orientation 21 of the user allows hearing the guitar 16 at good volume in front of the user. The user will be able to hear all details of this guitar, because all other instruments and the vocal will be at lower volume, as a background. Position 20 of the user is integral: it allows hearing all instruments and the vocal at the same volume at the same time. The user will hear that he is surrounded by all instruments. In position 20 the user may hear the music composition very close to how the same composition would sound brought together in the studio. Most other user positions and orientations will give a different sound than the composition premixed by a sound engineer.

But the key point is that the mixing process happens on the user's gadget in real time during listening. The mixing can also happen on a server and be streamed to the user's gadget and headphones, but either way it happens in real time, during the user's listening. The mixing process depends on the user's actions, including his position and orientation in the virtual space, which makes the listening process interactive.
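The real-time per-source, per-ear mixing described above can be sketched as follows. This is a minimal illustration under assumed conventions (2D positions, yaw measured counterclockwise from the +x axis with +y to the listener's left, inverse-distance attenuation, and a simple sine-law pan standing in for a full HRTF); none of these names or formulas come from the application itself:

```python
import math

def ear_gains(listener_pos, listener_yaw_deg, source_pos, ref_dist=1.0):
    """Per-ear gains for one point source: inverse-distance attenuation
    combined with a simple level-difference pan derived from the source
    azimuth relative to the head (a crude stand-in for an HRTF)."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dist = max(math.hypot(dx, dy), ref_dist)
    attenuation = ref_dist / dist
    azimuth = math.atan2(dy, dx) - math.radians(listener_yaw_deg)
    pan = math.sin(azimuth)                  # +1 = fully left, -1 = fully right
    left = attenuation * (1.0 + pan) / 2.0
    right = attenuation * (1.0 - pan) / 2.0
    return left, right

def mix_frame(samples, sources, listener_pos, listener_yaw_deg):
    """Mix one audio frame: samples[i] is the current mono sample of
    track i, sources[i] its (x, y) position in the virtual space."""
    out_l = out_r = 0.0
    for s, pos in zip(samples, sources):
        gl, gr = ear_gains(listener_pos, listener_yaw_deg, pos)
        out_l += gl * s
        out_r += gr * s
    return out_l, out_r
```

Calling `mix_frame` once per audio sample (or, more realistically, once per buffer with smoothed gains) reproduces the idea that each listener position and orientation yields a different mix of the same unmixed tracks.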

FIG.8 is a view illustrating another disposition of the band and the user's interactive route. FIG.8 shows a different disposition of the musical instrument and vocal sources than was illustrated in FIG.7. The route 22 shows how the user changed his position while a part of the music composition was playing. The user is able to move in the virtual space between the musical instruments and the vocal differently at each listening. Each time the user can hear the music composition differently and perceive new aspects and details of it. Such interactive possibilities were not possible with a fixed music composition brought together by a sound engineer: the user always hears a fixed mix identically. In the virtual space, by contrast, the user is able to move along different routes to take in different aspects of the music composition. Such interactive possibilities were also not possible earlier because powerful processors in users' gadgets appeared only recently. The processors are now able to calculate the sound from each music source for each of the user's ears in real time.

Industrial Applicability

The absence of computers and sound engines for three-dimensional space in earlier times left its traces on the method of recording music pieces. The sound, even of mutually spaced sources (channels), is recorded and mixed into two stereo channels for the user, even when this is performed by professional sound producers. The user is left with just one interactive control: volume. A more advanced variant, Dolby recording and reproduction, is more progressive but has the same disadvantage: pre-recorded sound without the possibility of interactive interaction with the separate sources. The minimal possibility the user has is to change the sound volume of the whole work or of separate speakers, but not of the initially recorded channels of the sound sources. Even on expensive high-end equipment the user can reinforce the sound of a certain frequency with equalizers and change the volume of the stereo or Dolby channels, but not of the initial sound sources. The user has no possibility to come closer to a sound source to enjoy its nuances nearby, or to turn towards it the way he would like. These are the limited possibilities of receiving previously mixed sound sources, which cannot give the effect given by the method provided in the invention.

Recording studios hold archives of soundtracks of works with unmixed sound, with channels that were never pre-mixed together. The invention proposes a new method of using such recordings that will allow users to enjoy music and songs with new interactive possibilities, providing an opportunity to feel the particular nuances of each sound source and to listen to the same work thousands of times in different ways. Users can give attention to the sources they prefer. This method will allow the holders of the rights to these recordings to earn additional income, opening new commercial possibilities for the use of these archives: they need only create interactive applications based on the stated technology. Of course, recording new works according to this invention will allow using music pieces more variedly and commercially, especially because of the interactive possibilities for billions of user gadgets such as smartphones, tablets, virtual reality glasses, and other portable devices. According to the invention, the use of multichannel sound for interactive applications with 3D sound will allow the creation of more individual, intimate, and interactive music compositions. In these works-applications the user can come up to the artist, stand between the artists and musicians, become the central «place» for which the work is composed, and in some cases even be its participant. The key is that the mixing process happens in real time on the user's gadget.

This will allow the user to become a music creator to some extent (or a creator of music variants), a sound producer, or an editor of this music. The user would be able to position the sound sources, including animated ones, moving them in the space in a predetermined or random way over a period of time, as he likes. The user would have the possibility of a more advanced kind of original KARAOKE. He could reproduce a piece by himself, having decreased the volume of or removed the vocal channel, as well as record it for further playback by other users. An additional novelty is that it will be possible to perform substitution in the original karaoke via any channel (sound source) or via several of them. For example, if you are a bass guitar player or are fond of percussion instruments, you will have the possibility to play a part of your favorite music piece on your guitar "together" with a great artist, and then listen to the music piece in which you have played a part. This can be the basis for a new type of interactive game, of the "Rock Band" kind, but with real listener participation in the music. There will be applications with even greater possibilities and interactivity. It is impossible to do this with previously recorded music where the channels are combined, mixed, and brought together into stereo or Dolby channels. According to the invention, each recorded channel (sound source) is set in the virtual space with its own coordinates. In the simplest case these can be point sound sources with sound propagating evenly around and above. In some cases the orientation of the sources in the space will also be important, with its function of power distribution in the space depending on orientation; for example, sound shading by the artist's head can be taken into account. Positional and orientation coordinates in the application can be fixed, changed according to the scenario and/or at random, or controlled by the user. The coordinates of the user in this virtual space can also be changed.
A sound engine ensures at least the main properties of 3D sound: attenuation as the sound source moves away from the listener, and differently calculated sound volumes for the left and right ears depending on the distance to the sound source and the head orientation relative to the line connecting the user and the sound source. These sound properties are well known and are simply simulated for the virtual space. All real properties of sound can be included in the sound engine, or unreal ones added. Some additional well-known properties are: diffraction, interference, the time difference of sound arrival at the right and left ears, accounting for sound shading by the head or other obstacles, and changes of reception properties depending on the sound frequency (spectral characteristics), as well as combinations of the above-listed properties. The user can locate a sound source relative to himself by sound reception alone. For example, a singer or singers will be mutually spaced in the virtual space, as will the guitar players, the percussionist, and other participants of a vocal-instrumental ensemble. The sound engine will change the sound volume of each source depending on the distance and on the user's orientation towards each source. The user in stereo headphones will hear the sounds, and his brain will calculate (its neural networks will give a sufficiently definite indication of) where the sources are, even if the user does not see them. This gives the user the possibility to move towards the sound sources the way he wants and to determine their location by sound. For example, when a vocal part begins it is possible to come closer to the vocalist, and when, e.g., the bass guitar plays, to come closer to the bass guitar player, insofar as that is comfortable for him. In some applications this possibility will allow the user to preset the sound sources the way he wants and to move them during the presentation.
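One of the listed properties, the time difference of sound arrival at the right and left ears, has a classical closed-form approximation. The sketch below uses Woodworth's spherical-head formula; the head radius, the constant names, and the choice of this particular model are illustrative assumptions, not requirements of the application:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at about 20 °C
HEAD_RADIUS = 0.0875     # m, an assumed average head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's approximation of the interaural time difference
    for a spherical head: ITD = (r / c) * (sin(theta) + theta),
    where theta is the source azimuth from straight ahead."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)
```

A source straight ahead gives zero ITD; a source directly to one side gives roughly 0.65 ms, which is within the range the human auditory system uses for localization.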
The user-listener acts there as a kind of bandmaster, stage director, or sound producer, on whose actions the sound volume and accents depend. Having described such interactive three-dimensional sound for user applications in detail, we note that these applications can be combined with the reproduction of visual imagery in the form of virtual reality or augmented reality, or in some cases with panoramic or simple video. These interactive applications can also be used for sound only, though the engine for sound calculations will still use the coordinates in the virtual space from the sound sources to the ears of the user immersed in this virtual space. It is profitable to complement such an application with visual imagery displaying the instruments and artists. Animation of the artists and the use of visual effects will show to advantage. Photos and videos implemented in a virtual reality application could be part of such applications. The application could be free-to-play with paid features inside. It is useful to complement such an application with a virtual guide with comments, whose video display and/or sound could be turned off as desired. This guide could provide the user with a translation of the song into his native language. The translation could also be displayed as three-dimensional running text in the artist's or listener's native language, and likewise turned off. Game moments could consist in following the animated artist, approaching the instruments or the vocalist at the necessary moment, when his part begins. An expert or fan would get more points because they know the music piece and can anticipate what comes next. An interesting task for fans would be to locate the sound sources in such a way that the result of playback matches the known variant on a record or CD; this could also be scored in points. The user, by listening and interacting within an application based on multichannel three-dimensional sound, will search for the best route and points to find the best playback.
The user could share his recorded routes for a certain music piece so that his relatives would be able to feel it as deeply as he did. The avatar of the user can be displayed in the application. Then he and a partner connected to the same space (each also displayed for the other) would be present in one virtual space. It would be shared listening: they could discuss the events together and communicate. This variant would be most applicable in social networks. Another variant is a game with multichannel three-dimensional sound where the user looks for a good place for the sound, since during the song the best place (point) for listening can change.

Interfaces for applications with three-dimensional sound

Interface variants for user interaction with the sound sources in the virtual space:

The interface for such listening can be quite diverse, starting with the possibility of physical movement in real space in a virtual helmet with headphones (or without a virtual helmet but with headphones) if the user's movement is monitored with sensors, e.g. by means of Microsoft Kinect, or by physical walking in the Virtusphere. In such a case his movements will change his position in the virtual space, allowing him to approach or move away from the sound sources or change his orientation towards them. With a smartphone it is possible, e.g., to walk physically if the smartphone or additional sensors monitor the user's movements, as in smartphones with Tango technology. A more commonly used variant involves gadgets where the user moves in the virtual space as in computer games, with various interfaces. The most common of them are the following: using a virtual helmet with an orientation sensor and headphones; simply using a smartphone with headphones, with its orientation sensor controlling the rotations of the avatar's body in the application in alignment with the smartphone, while movements are made, e.g., with the touchscreen or a gamepad; or using a smartphone with headphones without turning around with the smartphone (if it has no orientation sensor), using the touchscreen or a gamepad for rotations and movements inside the virtual space. The last variant is suitable if the user, e.g., sits in an armchair in an airplane or bus and does not have the possibility to turn on his axis to orient himself in the virtual space the way he orients in real space. By rotating in the virtual space (by rotating his avatar) the user practically rotates the virtual space itself. Having heard a sound source, e.g., at an angle of 40 degrees to the left, a user with a little experience will turn the space to the right so that the sound source is opposite him, in the middle of the screen. And if, e.g., this sound source is an enemy object, he will be able to shoot it, or come closer and shoot it, or turn a weapon in its direction and shoot, or move (run away) to a safe side. Such usage of three-dimensional sound in applications will help the game player a lot and can become the main game moment. Some of the described interfaces could be used for playing blindfold, reacting to the sound from the sound sources of an application without the virtual space or virtual objects being shown on the screen.
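The "turn the space until the source is centered" maneuver amounts to adding the perceived azimuth to the avatar's yaw. A minimal sketch, under the assumed conventions that yaw increases counterclockwise and a positive azimuth means degrees to the left of straight ahead (neither convention is fixed by the application):

```python
def yaw_after_centering(current_yaw_deg, source_azimuth_deg):
    """Turn the avatar so that a source heard at source_azimuth_deg
    (positive = to the left of straight ahead) ends up dead ahead,
    i.e. in the middle of the screen."""
    return (current_yaw_deg + source_azimuth_deg) % 360.0
```

A game could then treat "source centered within a few degrees" as the condition for a successful blindfold shot.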

Such interactive immersion into three-dimensional sound, with or without display of the virtual space, should have a deep and clear effect on the user (possibly a deeper one on the subconscious level), and it will give additional strong possibilities for advertising and tuition. Such interactive immersion into a space with sources of three-dimensional sound would allow even blind or visually impaired people to play three-dimensional games, because it provides the possibility to orient in the space by sounds and to interact with them.

A sound source can be not only a point source, as is usually implemented in sound engines, but also an extended one (e.g., a string of a guitar or piano). This will allow a dimensional, rich, and natural sound even from a single source, if it is provided correctly.
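One straightforward way to realize an extended source in an engine built around point sources is to sample it as several point sources along its length, each carrying an equal share of the power. This is an illustrative sketch, not a method prescribed by the application:

```python
def sample_string_source(p0, p1, n=8):
    """Approximate an extended source (e.g. a guitar string from p0 to p1,
    both (x, y) tuples) as n >= 2 evenly spaced point sources. Each sample
    would then be fed to the engine with 1/n of the source's power."""
    return [
        (p0[0] + (p1[0] - p0[0]) * i / (n - 1),
         p0[1] + (p1[1] - p0[1]) * i / (n - 1))
        for i in range(n)
    ]
```

With the per-ear gain calculation applied to each sample point, a listener close to one end of the string would hear that end slightly louder, giving the spatial richness the paragraph describes.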

It is possible to complement the sound in the hall with virtual sources in order to give the user a feeling of involvement with other virtual listeners, e.g. with the applause of other listeners and approving outcries mutually spaced all around. This can be a variant the user chooses while listening to a music piece (or in an interactive application, e.g. on a smartphone). For example, in the well-known song of the band Eagles, "Hotel California", over the track that was probably recorded in a studio, the audience's reaction to the song as performed in a concert hall has also been recorded. It provides a sense of involvement in listening in the hall, though the user probably listens to the track individually in his car, at home from speakers, or through headphones. Only chamber music and salon performances, where the singer is in close proximity to the listener, come closer to this.