

Title:
SYSTEM FOR MULTIMEDIA BROADCASTING
Document Type and Number:
WIPO Patent Application WO/2013/083133
Kind Code:
A1
Abstract:
A system for transmitting one or more simultaneous independent media channels that can be temporally and geographically synchronized between one or more recipients. A wireless broadcast system transmits one or more media channels simultaneously represented in wireless signals. Recipients, e.g. mobile phones or smartphones, receive the wireless signals and extract the media channels therein. A time stamp and location data are included in each of the independent media channels. Based on the time stamp and location data, e.g. information on the location of PA loudspeakers at an event, the recipients can calculate the acoustic delay from the PA loudspeakers and thus, with a known location relative to the loudspeakers, generate an audio signal synchronized with the acoustic signal received from the PA loudspeakers. Hereby, it is possible to distribute high quality audio synchronized with a live event, e.g. a concert, utilizing the fact that the audience at such events typically brings mobile phones with headphones.

Inventors:
SONDRUP THOMAS MOELGAARD (DK)
VESTERSKOV CLAUS ESTRUP (DK)
Application Number:
PCT/DK2012/050446
Publication Date:
June 13, 2013
Filing Date:
December 06, 2012
Assignee:
AUDUX APS (DK)
International Classes:
H04H20/61
Foreign References:
US20100215165A12010-08-26
US20090262946A12009-10-22
US7995770B12011-08-09
US20090220104A12009-09-03
US5432858A1995-07-11
US6329908B12001-12-11
Attorney, Agent or Firm:
PLOUGMANN & VINGTOFT A/S (Copenhagen S, DK)
Claims:
CLAIMS

1. System for transmitting one or more simultaneous independent media channels that can be temporally and geographically synchronized between one or more recipients, where the system includes:

- a wireless broadcast system that can transmit one or more media channels simultaneously represented on one or more independent wireless signals,

- one or more recipients designed to receive the one or more independent wireless signals and to extract the one or more media channels represented therein,

wherein each of the one or more simultaneous independent media channels contains a time stamp and location data.

2. System according to claim 1, wherein the wireless broadcast system is able to perform signal processing on the one or more media channels based on the recipients' individual locations relative to the wireless transmitter's geographical and/or virtual location.

3. System according to claim 2, where each of the one or more receivers is able to generate its own channel composition and channel signal processing based on the mutual location of the one or more multimedia channel locations, the one or more receivers, and any other receivers.

4. System according to any of the preceding claims, wherein the wireless broadcast system can perform signal processing on the one or more media channels based on one or more of the following parameters: 1) the one or more recipients' individual locations and orientations, and 2) the mutual location between the one or more multimedia channel locations, the one or more recipients, and any other recipients' orientations.

5. System according to claim 4, where the wireless broadcast system can perform signal processing on the one or more media channels based on both of the mentioned parameters 1) and 2).

6. System according to claim 4 or 5, where the one or more receivers are able to generate their own channel composition and channel signal processing based on the mutual location and orientation between the channel location, the orientation of the one or more receivers, and any other recipients' geographical locations and orientations.

7. System according to any of the preceding claims, where the wireless broadcast system can perform signal processing on the one or more media channels based on one or more of the following parameters: 1) the recipients' individual locations, 2) the recipients' individual movement speeds, 3) the recipients' individual orientations, 4) the mutual geographical location between the channels' location, the personal receiver and any other participating personal receivers, and 5) the mutual location and orientation between the channels' location, the one or more recipients and any other participating receivers.

8. System according to claim 7, where the wireless broadcast system can perform signal processing on the one or more multimedia channels based on all of the above parameters 1)-5).

9. System according to claim 7 or 8, where the one or more receivers are able to generate their own channel composition and channel signal processing based on the mutual location and orientation between the channel location, the orientation of the one or more receivers, and any other recipients' geographical locations and orientations.

10. System according to any of the preceding claims, where the wireless broadcast system can perform signal processing on the one or more media channels based on one or more of the following parameters: 1) the one or more recipients' individual locations and speeds, and 2) the mutual geographical location between the channels' location, the one or more recipients and any other recipients.

11. A system according to claim 10, where the wireless broadcast system can perform signal processing on the one or more media channels based on both of the parameters 1) and 2).

12. A system according to claim 10 or 11, where the one or more receivers are able to generate their own channel composition and channel signal processing on the basis of the mutual location and speed between the channels' position, the one or more receivers, and any other recipients' geographical locations and speeds.

13. System according to any of the preceding claims, where the wireless broadcast system can perform signal processing on the one or more media channels based on one or more of the following parameters: 1) the one or more recipients' individual locations, orientations and speeds, and 2) the mutual location and orientation between the channels' location, the one or more recipients and any other recipients.

14. System according to claim 13, where the wireless broadcast system can perform signal processing on the one or more media channels based on both of the parameters 1) and 2).

15. System according to claim 13 or 14, where the one or more receivers are able to generate their own channel composition and channel signal processing based on the mutual geographical location, orientation and speed between the channels' locations and orientations, those of the one or more receivers, and any other recipients' geographical locations, orientations and speeds.

16. System according to any of the preceding claims, where the one or more recipients include one of the following: a mobile phone, such as a smartphone, a tablet, and a laptop.

17. System according to any of the preceding claims, where the wireless broadcast system is able to generate the wireless signal in accordance with an IP standard, such as IP version 4 or IP version 6.

18. System according to any of the preceding claims, where the wireless transmission system comprises a computer with software that is able to perform signal processing on the one or more media channels.

19. System according to any of the preceding claims, where the wireless broadcast system comprises at least one of the following: 1) an audio interface, such as AES3, for receiving an audio signal, 2) a position interface, such as an NMEA interface, for receiving a GPS signal, and 3) an interface, such as a DMX input, for receiving a signal for synchronizing audio with one or more receivers and a light effect.

20. System according to any of the preceding claims, where the wireless system comprises another wireless transmitter that is able to transmit synchronization and channel information for the one or more recipients through a wireless signal, such as a wireless signal with one of the formats: Bluetooth, WiFi, and WiMax.

21. System according to any of the preceding claims, where there are several simultaneous and independent media channels, and wherein the individual channels are transmitted individually or multiplexed.

22. System according to any of the preceding claims, where the wireless system provides a timestamp and sends it to the one or more receivers to synchronize an audio signal in one or more media channels with the same audio signal received acoustically by the one or more recipients.

23. System according to any of the preceding claims, where the system is able to generate Head-Related Transfer Functions in order to adjust the perceived direction of a sound signal in one or more media channels to match an acoustic event.

24. System according to any of the preceding claims, wherein the media channels are multimedia channels.

25. System according to any of the preceding claims, wherein the media channels comprise at least one audio signal.

26. System according to any of the preceding claims, wherein two or more of the recipients are mutually time- and/or geographically synchronized.

27. System according to any of the preceding claims, wherein the location data comprises at least one of: data indicating a geographic or a virtual location of one or more media channels.

28. System according to any of the preceding claims, wherein the media channels are multimedia channels.

29. System according to any of the preceding claims, comprising an application program for a mobile device capable of enabling the mobile device to receive the one or more independent wireless signals and to extract the one or more media channels represented therein.

30. System according to claim 29, wherein the application program comprises an algorithm serving to read the time stamp and the location data, and to generate a stereo audio signal based on the one or more media channels accordingly.

31. System according to claim 30, wherein the application program comprises an algorithm capable of generating a three-dimensional stereo audio signal based on the location data and a library of Head-related Transfer Functions.

32. System according to any of the preceding claims, wherein the wireless broadcast system comprises a processing unit connected to the internet via a wired or wireless connection, and wherein the one or more independent wireless signals are transmitted to the recipients via a mobile network.

33. Use of the system according to one of the preceding claims for one or more of the following applications: video monitors in public spaces, sound enhancement at a concert, spatial playback of media material, experience spaces, and simultaneous playback of media material on two or more devices, such as audio, augmented reality and games.

34. Method for providing a plurality of recipients with one or more simultaneous independent media channels that can be temporally and geographically synchronized to an event, the method comprising:

- transmitting one or more media channels simultaneously represented on one or more independent wireless signals, wherein each of the one or more simultaneous independent media channels contains a time stamp and location data, such as a time stamp and location data related to the event,

- receiving the one or more independent wireless signals at one or more recipients, wherein the one or more recipients are designed to extract the one or more media channels represented therein,

- generating an audio and/or a video signal based on the one or more media channels and the time stamp and the location data, and

- presenting the audio and/or video signal to a user of the recipient.

Description:
SYSTEM FOR MULTIMEDIA BROADCASTING

FIELD OF THE INVENTION

The present invention relates to the field of wireless communication of multimedia channels, more specifically to systems for streaming and synchronizing audio content to a plurality of users.

BACKGROUND OF THE INVENTION

Today, a number of different systems exist for transmitting information to one or more user-based units at the same time. Current broadcast systems such as DVB-T (Digital Video Broadcasting - Terrestrial), DVB-H (Digital Video Broadcasting - Handheld), DAB (Digital Audio Broadcasting), IP streaming etc. all have the possibility to transmit one or more signals at the same time.

A common trait of the current systems is that they are good at transmitting one type of information to several users at the same time. These systems also use different coding algorithms, such as MPEG (Moving Picture Experts Group) coding, which allow the signals to be transmitted in a compact form while still achieving good quality.

The coding process used by the current systems can be time consuming when the signal must be both compact and of good quality. When a signal is decoded under the current principles, different decoding processes are used, and finishing filters are applied to optimize the signal.

The disadvantage of sending out signals by the current principles is that there is no simultaneity between several users receiving the same signal. Some systems can perform a simple synchronization between an acoustic sound signal and the same signal sent out as a radio signal. Here, the system measures the difference in time through correlation and then applies a delay to the sound signal received as a radio signal. This gives simultaneity between the acoustic signal and the signal sent as a radio signal. Such a system is patented by Clair, Jr. et al., US patent No. 5,432,858.

Multichannel sound systems also exist in which a person with an RFID tag can choose a sound channel. The user can then move through different rooms and still receive the same sound signal. Such a system is patented by Frecka, US patent No. 6,329,908.

There also exist FM and AM radio, where the signal can always be received synchronously between several users.

Other standards include IEEE (Institute of Electrical and Electronics Engineers) 802.1 AVB (Audio Video Bridging), consisting of IEEE 802.1AS (timing and synchronization), IEEE 802.1Qat (Stream Reservation Protocol), IEEE 802.1Qav (queuing and forwarding of time-sensitive streams) and IEEE 802.1BA (audio and video bridging systems), and ARIB (Association of Radio Industries and Businesses) STD-B24, Data Coding and Transmission Specification for Digital Broadcasting.

IEEE 802.1 AVB provides channel transfer with very low delay and also internal time synchronization between the channels, which means that all receivers will be synchronous.

The ARIB STD-B24 standard describes a broadcast system where several channels can be synchronized and combined in the receivers. The standard specifies that the channels may consist of video, graphics, subtitles and audio, where all the channels have the ability to be synchronous.

In summary, the current state of the technology is as follows:

- Some systems, such as DVB and DAB, have a high signal compression and thus make efficient use of the available bandwidth. The ARIB STD-B24 system also has the possibility of broadcasting multiple channels in the same broadcast program, and the possibility of synchronization between channels. However, a common trait is that the receivers cannot be guaranteed to be synchronous.

- Individual channel selection that follows the person from room to room via RFID.

- IEEE 802.1 AVB, and FM and AM radio, which allow individual channel selection and where all recipients receive synchronously.

SUMMARY OF THE INVENTION

Following the above description, it may be seen as an object of the present invention to provide a simple system capable of providing a plurality of users with an audio signal which is synchronized to a live event.

In a first aspect, the invention provides a system for transmitting one or more simultaneous independent media channels, such as channels with data representing an audio signal and/or a video signal, that can be temporally and geographically synchronized between one or more recipients, where the system includes:

- a wireless broadcast system that can transmit one or more media channels simultaneously represented on one or more independent wireless signals,

- one or more recipients, such as a smartphone or a PC, which are designed to receive the one or more independent wireless signals and to extract the one or more media channels represented therein,

wherein each of the one or more simultaneous independent media channels contains a time stamp and location data, such as data indicating a geographic or a virtual location, such as a geographical location of one or more media channels.
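The patent does not specify a wire format, but as a minimal sketch, a channel packet carrying the required time stamp and location data could be laid out as follows; the header layout and all names here are assumptions for illustration only:

```python
import struct

# Hypothetical packet layout (not specified by the patent): a fixed header
# carrying a channel id, a millisecond time stamp, and the latitude/longitude
# of the channel's (e.g. loudspeaker's) location, followed by the media payload.
HEADER = struct.Struct("!HQdd")  # unsigned short, unsigned long long, 2 doubles

def pack_packet(channel_id, timestamp_ms, lat, lon, payload):
    """Serialize one media-channel packet."""
    return HEADER.pack(channel_id, timestamp_ms, lat, lon) + payload

def unpack_packet(data):
    """Recover the header fields and the payload from a received packet."""
    channel_id, timestamp_ms, lat, lon = HEADER.unpack_from(data)
    return channel_id, timestamp_ms, lat, lon, data[HEADER.size:]
```

A receiver would read the time stamp and location from each packet before handing the payload to its decoder.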

The system is advantageous, for example, for providing a high quality audio signal to users' mobile phones at a concert or other event involving an audio and/or visual performance. If the location of each receiver is known, e.g. by means of GPS, it is possible to calculate, based on the time stamp, the delay of sound from the known sound source position(s) indicated in the location data, e.g. PA loudspeaker positions, to the receiver. Hereby, it is possible to transmit the high quality sound to the receiver synchronized with the acoustic signals from the sound source(s). Further, with known sound source positions, and possibly also the orientation of the receiver relative thereto, it is possible to apply 3D sound to the receiver by means of Head-Related Transfer Functions or the like. Hereby, it is possible to provide a stereo signal to headphones or earphones which includes spatial information that will give the user the correct position of the sound source(s).
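The delay calculation outlined above can be sketched in a few lines; the speed-of-sound constant, the coordinate convention, and the function names are assumptions for illustration, not taken from the patent:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C; an assumed constant

def acoustic_delay(speaker_pos, receiver_pos):
    """Seconds for sound to travel from a PA loudspeaker to the receiver;
    positions are (x, y) metres in a local event coordinate frame."""
    return math.dist(speaker_pos, receiver_pos) / SPEED_OF_SOUND

def playback_time(stream_timestamp_s, speaker_pos, receiver_pos):
    """Time at which a sample stamped stream_timestamp_s should be played
    so the wireless stream lines up with the arriving acoustic wavefront."""
    return stream_timestamp_s + acoustic_delay(speaker_pos, receiver_pos)
```

For example, a listener 68.6 m from the stage hears the PA about 0.2 s after the stamped instant, so the received stream is delayed by the same amount.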

The invention is characterized by a system that can transmit multiple simultaneous streams of information, here called multimedia channels or channels, such as real-time audio, real-time video and file-based material in the form of audio, video, images, text information and more, to one or more personal receivers. All the transmitted channels can be synchronized so that one or more channels run either synchronously between one or more individual persons or synchronously with an external output, such as a video monitor, a loudspeaker, and so on. The system also has the possibility of synchronizing received channels in relation to the position relative to an object, or by the receiver's location.

This ensures that real-time channels can be synchronized with other real-time channels, and especially with file-based channels, and also synchronized in time or in distance relative to other virtual or physical objects.

In order to achieve a more realistic representation of the channels, the invention also makes it possible to apply filters, such as treble attenuation and/or 3D audio filtering, to an audio channel, so that the recipient has the option to determine the placement of the virtual sound source.

The system utilizes the fact that many people have smartphones with significant processing power and with high quality headphones or earphones, thus providing the platform for receiving the media channel(s) and calculating a delay to the sound source(s), and possibly directions thereto, to provide the user with a high quality audio signal which is synchronized with the acoustical event. Thus, the system can be implemented with a rather simple wireless broadcast system and a suitable program for the users' mobile phones. In certain embodiments, the user's mobile phone is used to generate 3D sound coherent with the user's location and orientation relative to one or more sound sources of the event.
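One ingredient of such 3D rendering is the source's azimuth relative to the listener's facing direction, which a receiver application might estimate roughly as follows; this is a sketch, and the names and coordinate conventions are illustrative assumptions:

```python
import math

def source_azimuth(listener_pos, listener_heading_deg, source_pos):
    """Angle of the sound source relative to where the listener faces,
    in degrees, positive to the listener's right. Positions are (x, y)
    metres; heading 0 means facing +y (north)."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # compass bearing to source
    # Normalize the difference into (-180, 180] degrees.
    return (bearing - listener_heading_deg + 180.0) % 360.0 - 180.0
```

The resulting azimuth could then be used to select the nearest entry in a library of Head-Related Transfer Functions.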

In summary, in preferred embodiments, the system includes the following features:

- Synchronization of one or more channels between multiple recipients, with the possibility of simultaneous temporal playback on multiple recipients.

- Possibility of synchronization to external events.

- Possibility of synchronization to physical or virtual objects.

- Possibility of channel filtering and 3D location of virtual objects.

In a second aspect, the invention provides use of the system according to one of the preceding claims for one or more of the following applications: video monitors in public spaces, sound enhancement at a concert, spatial playback of media material, experience spaces, and simultaneous playback of media material on two or more devices, such as audio, augmented reality, games and different kinds of simulators.

In a third aspect, the invention provides a method for providing a plurality of recipients with one or more simultaneous independent media channels that can be temporally and geographically synchronized to an event, the method comprising:

- transmitting one or more media channels simultaneously represented on one or more independent wireless signals, wherein each of the one or more simultaneous independent media channels contains a time stamp and location data, such as a time stamp and location data related to the event,

- receiving the one or more independent wireless signals at one or more recipients, wherein the one or more recipients are designed to extract the one or more media channels represented therein,

- generating an audio and/or a video signal based on the one or more media channels and the time stamp and the location data, and

- presenting the audio and/or video signal to a user of the recipient.

With this method, the recipient, e.g. a mobile phone, is used to generate an audio and/or video signal synchronized with the event, e.g. a live event such as a concert or the like.

The first, second, and third aspects may each be combined with any of the other aspects. These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.

BRIEF DESCRIPTION OF THE FIGURES

Embodiments of the invention will be described in more detail in the following with regard to the accompanying figures. The figures show one way of implementing the present invention and are not to be construed as limiting other possible embodiments falling within the scope of the attached claim set.

Fig. 1 shows in schematic form the basic structure of an embodiment of the invention.

Fig. 2 shows the system's transmission of information.

Fig. 3 shows the system's "Master Channel Descriptor".

Fig. 4 shows the transmitted multiplexed channels from the system.

Fig. 5 shows the structure of the synchronization system.

Fig. 6 shows the partial structure of the synchronization system for file-based channel playback.

Fig. 7 shows the structure of local redundancy.

Fig. 8 shows the structure of the sending unit.

Fig. 9 shows a schematic presentation of an embodiment of the invention.

Fig. 10 shows a schematic presentation of another embodiment of the invention.

Fig. 11 shows a schematic presentation of an example of a Master synchronization system.

Fig. 12 shows a schematic presentation of an example of a Client synchronization system.

DETAILED DESCRIPTION OF AN EMBODIMENT

Fig. 1 shows the structure of an embodiment of the invention, indicating the distribution of channels. One or more real-time audio and/or video signals A are encoded by one or more audio and/or video coders B and transmitted, together with the synchronization information D, to a standard network device E. In addition to real-time audio and video channels, the system can also transfer file-based information C, such as sound, images, video and text information. For synchronization between the transmitted real-time and file-based channels, time- and/or location-specific synchronization information D is broadcast.

The synchronization information allows playback of real-time or file-based channels based on the user's location and/or simultaneous playback of channels on all users' personal devices F. A user's personal device F may be, but is not limited to, a PDA (Personal Digital Assistant), a tablet or a smartphone.

Via the network device E, one or more recipients in the form of personal devices F receive and decode the transmitted channels via a standard wireless network and present the broadcast channels to the user. The personal devices F include a wireless receiver f1, a channel decoder f2, a localization unit f3, a synchronizing unit f4, a display f5, and loudspeakers and/or headphones f6.

Fig. 2 shows the principle of sending out the channels to individual devices. The common transmitting network device A uses a standard data network component, such as the IP standard, including for example IP v4 or IP v6. The network component could be based on the WiFi standard (IEEE 802.11 with variants), the WiMax standard (IEEE 802.16 with variants), the Bluetooth standard (IEEE 802.15 with variants), and others.

To transmit information from the network component to the recipients in the form of personal receiver units B, IP UDP unicast could be used; or, in places with many simultaneous receivers, IP broadcast or the IGMP (Internet Group Management Protocol) and/or MLD (Multicast Listener Discovery) multicast protocols can be used to optimize the data network infrastructure.

In the following, a preferred channel structure will be described. Fig. 3 shows the preferred "Master Channel Descriptor", which the system emits. The "Master Channel Descriptor" is shown here with three available channels (A, B, and C); however, it is to be understood that there may generally be fewer or more concurrent channels. The "Master Channel Descriptor" is a data stream containing all the information about the broadcast channels, such as audio, image, video or file data, that is output from the system. The personal device starts by decoding the "Master Channel Descriptor" information and then begins to receive the necessary channels.

As an example, the "Master Channel Descriptor" transmitted includes the following information, as listed in the following non-exhaustive list:

- "Channel ID": Channel reference; it also contains information about how the personal device should receive the channel.

- "Channel Name": Name of the channel as it may be presented to the user.

- "Channel Description": Description of the channel as it may be presented to the user.

- "Coverage": Information about the channel's coverage.

- "Registration Server": Specifies the registration server that the personal unit must register with in order to receive the channel.

- "Channel Type": Indicates the channel type, which can be real-time audio, video or file-based content.

- "Language": Indicates the languages used in the current channel. This function is e.g. used in cases where the same channel content is distributed in several different languages, whereby the personal unit automatically, or the user manually, can choose the spoken language of the current channel.

- "Channel Synchronization": Used for signaling channel synchronization for the current channel, including among other things the synchronization principle that the channel uses, as well as information about whether the current channel is to be played or stored for later playback. Synchronization in the "Master Channel Descriptor" is primarily used for the deployment of a common time reference, specified in either milliseconds or microseconds, for the synchronization of the time-dependent channels. Furthermore, static synchronization information is transmitted through the "Master Channel Descriptor".

Fig. 4 shows an example of multiplexed channels in the case where a plurality of channels are broadcast as multiplexed channels. In general, the individual channels are broadcast either individually or multiplexed, depending on the current usage scenario. In Fig. 4 three multiplexed channels are outlined as an example. Sending an individual channel corresponds to "Channel A" in Fig. 4. When transmitting the channels, the following information, as shown in the following non-exhaustive list, could be available:

- "Channel ID": The system's unique reference to the channel

- "Channel Synchronization": Used for signaling channel synchronization for the current channel, including among other things the synchronization principle that the channel uses, as well as information about whether the current channel should be played or stored for later playback. Primarily, only dynamic synchronization information is sent through the channel. Dynamic synchronization information can, for example, describe a sound source which can move within the soundstage.

- "Channel Payload": Data information belonging to the channel. The personal unit decodes the information and, together with the channel type indicated in the "Master Channel Descriptor", the receiver determines what should be done with the data information.

In the following, preferred synchronization schemes will be described.
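For illustration only, the descriptor and per-channel fields listed above could be modeled as simple records; the field names are paraphrases of the listed items, and the patent prescribes no concrete data structure:

```python
from dataclasses import dataclass, field

@dataclass
class ChannelEntry:
    """One channel as announced in the Master Channel Descriptor."""
    channel_id: int
    name: str
    description: str
    coverage: str
    registration_server: str
    channel_type: str          # e.g. "realtime-audio", "realtime-video", "file"
    languages: list = field(default_factory=list)
    synchronization: dict = field(default_factory=dict)

@dataclass
class MasterChannelDescriptor:
    """Announcement of all broadcast channels plus a common time reference."""
    time_reference_ms: int
    channels: list = field(default_factory=list)

    def find(self, channel_id):
        """Look up a channel by its unique id, or None if absent."""
        return next((c for c in self.channels if c.channel_id == channel_id), None)
```

A personal device would first decode such a descriptor and then subscribe to the channels it needs.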

For the synchronization of the personal receivers, the necessary synchronization information is transmitted from the system. The synchronization system is built around 3D locations of the scene and the multimedia channels. Fig. 5 shows a preferred architecture of a synchronization system.

Sync data can be transmitted in two ways:

- Broadcast via (A), the "Master Channel Descriptor", primarily used for time synchronization of personal devices and for static synchronization information.

- Distribution of the synchronization information via the channel (B) in which the synchronization information is to be used; primarily used for the deployment of dynamic synchronization information.

The system uses the following synchronization information, as specified below:

- Scene starting point: defines the scene, i.e. the area where all the channels are to be used.

- Scene limits: sets the scene's external borders.

- Channel location: indicates the location of a channel in the scene.

- Channel speed: indicates a channel's movement in all three dimensions.

- Channel dispersal: indicates the channel's three-dimensional active area.

- Current Time Reference: indicates the system's current reference time.

- Current time accuracy: indicates the current accuracy of the received time reference; the parameter is calculated from the measured jitter of the received time reference signal.
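As an illustration, the "current time accuracy" parameter could be estimated from the jitter of recently received time references; this sketch and its names are assumptions, not taken from the patent:

```python
import statistics

def time_accuracy_ms(arrival_offsets_ms):
    """Estimate the accuracy of the received time reference as the standard
    deviation of recent (local receive time minus stamped reference time)
    offsets: high jitter means low confidence in the reference."""
    if len(arrival_offsets_ms) < 2:
        return float("inf")  # too few samples to judge the accuracy
    return statistics.stdev(arrival_offsets_ms)
```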

The sync system is structured in the following way: via (A), the personal receiver receives the "Master Channel Descriptor", where the synchronization information is decoded (B) if the channel sends synchronization information via the "Master Channel Descriptor".

The decoded synchronization information (B) is passed to the channel synchronization and alignment device (F).

Through the "Master Channel Descriptor" broadcast (A), time adjustment information can be sent out; this information is decoded by the time stamp decoder (C). The decoded time information is used to control a time integration system (D), which, via the personal receiver's own clock (E), produces a reference time that is passed to the channel synchronization and alignment device (F).

Synchronization information received through the channel receiver (B) is passed, via the sync decoder (U), to the channel synchronization and alignment device (F).

The user's direction can be determined via the receiver device's built-in compass, gyroscope, acceleration sensor, and others (H). The directional information is passed to the tracking system (K). The user's location is determined using a positioning system (I), such as a GPS receiver built into the personal receiver.

The location information is transmitted to the tracking system (K). In order to increase the accuracy of the user's location, or to enable determination of the user's location in places where the normal positioning systems in the personal receiver do not function properly, an assisting system (J) in the personal receiver can be used. The assisting system (J) can, for example, use a built-in gyroscope or WiFi signals to determine the user's location. The information from the assisting positioning system (J) is transmitted to the tracking system (K).

All position and orientation information is transmitted from the tracking system (K) to each channel's synchronization and alignment device (F). The synchronization and alignment device (F) performs calculations on the basis of, among other things, the channel location, dispersal and channel time reference. The information calculated by the synchronization and alignment device (F) is transmitted to the audio equalizer (L), the audio delay unit (M), the diffusion unit (N) and the HRTF unit (O) (Head-Related Transfer Function), which is used to generate 3D sound.
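In the simplest case, the roles of the equalizer (L), delay unit (M) and attenuation device (P) reduce to distance-dependent gains and a delay; the following is a rough, illustrative approximation only, with assumed constants and names:

```python
def distance_gain(distance_m, reference_m=1.0):
    """Inverse-distance level drop (about -6 dB per doubling of distance),
    standing in for the sound attenuation device (P)."""
    return reference_m / max(distance_m, reference_m)

def treble_gain(distance_m, cutoff_m=50.0):
    """Crude stand-in for the equalizer (L): progressively damp the high
    frequencies of channels located far from the user."""
    return 1.0 if distance_m <= cutoff_m else cutoff_m / distance_m

def channel_delay_s(distance_m, speed_of_sound=343.0):
    """Stand-in for the audio delay unit (M): propagation delay in seconds."""
    return distance_m / speed_of_sound
```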

Through (B), the channel receiver, (G), the audio decoder, decodes the channel's broadcast audio material. The sound material is passed to (L), the audio equalizer, which is used for bass compensation and can be used to control the high frequencies of channels located far from the user.

Through (M), the audio delay unit, the channel can be time synchronized according to, for example, the channel location, and/or playback can be synchronized on multiple personal devices relative to the location of the channel. (N), the diffusion unit, is used to add reverberant sound to the channel to simulate the user's distance from the channel.

To allow the user to hear where the channel is located, the channel is sent through an HTF circuit that applies 3D sound information. For level adjustment, including the simulation of a channel's location, (P), a sound attenuation device, is used.
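A minimal sketch of the level adjustment the (P) sound attenuation device could apply, assuming a simple free-field inverse distance law (about 6 dB of level drop per doubling of distance); the function name is illustrative and not part of the invention:

```python
import math

def attenuation_db(distance_m, ref_distance_m=1.0):
    """Level drop in dB relative to a reference distance, following the
    free-field rule 20*log10(d/d0). Used to simulate how far a virtual
    channel appears to be from the user."""
    return 20.0 * math.log10(distance_m / ref_distance_m)
```

Doubling the distance from 1 m to 2 m yields roughly a 6 dB drop, which the attenuation device would apply to the channel's gain.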

If a channel contains video or other graphic material, the material is received via (B), the channel receiver, and decoded via (Q), a video and/or graphics decoder. Via (R) it is possible, using information from (F), the synchronization and alignment device, to insert a delay in the channel, either for simultaneous playback on all personal devices or for distance-related playback of the material. Through (S), the 3D graphics and video processor, it is possible, based on (F), the synchronization and alignment device, to customize the display of graphics and video material in relation to the user's distance and location relative to the channel. The material from (S), the 3D graphics and video processor, is presented to the user on (T), the display.

Fig. 6 shows how file-based channels are handled. The file based channel receiver U stores the received information on the personal receiver's file system V. Through the synchronization and alignment device F it is possible to start and stop the channel player X, which can execute synchronization information as well as audio, graphics and video information. Information from the channel player X is decoded and processed via the synchronization and alignment device F, the audio decoder G, and the video and/or graphics decoder Q.

In the following, preferred ways of handling registration will be described.

When the user wants to initiate the receipt of one or more channels, the personal receiver contacts a central server to receive information on how the "Master Channel Descriptor" is received, and simultaneously receives a unique ID. During the registration process, the personal receiver's own serial number and location are saved in a log in the central database. The personal receiver connects to this network, starts receiving the "Master Channel Descriptor", presents the possible options to the user, and then starts receiving the different channels.
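The registration handshake described above can be sketched as follows. All field names, the network name and the port number are hypothetical placeholders for illustration; they are not specified by the invention:

```python
import json
import uuid

def register_receiver(serial_number, location):
    """Sketch of the central server's registration step: log the
    receiver's serial number and location, hand back a unique ID and
    (hypothetical) details on where the Master Channel Descriptor
    can be received."""
    unique_id = str(uuid.uuid4())
    # entry written to the central database log
    log_entry = {"id": unique_id, "serial": serial_number, "location": location}
    # reply sent back to the personal receiver (illustrative fields)
    reply = {
        "id": unique_id,
        "descriptor_source": {"network": "event-wifi", "port": 5000},
    }
    return log_entry, json.dumps(reply)
```

The receiver would then join the indicated network and begin decoding the "Master Channel Descriptor" using the returned ID.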

In the "Master Channel Descriptor" there may be information addressed to a personal receiver's unique ID. This makes it possible to address a single receiver, which must respond back when it is addressed. This 2-way communication form is intended primarily for feedback supporting the redundancy in the system, and for giving the user an opportunity to answer polls and give other feedback to the system.

As the system's broadcast operation is primarily built on the principle of point to multipoint, redundancy can be built into the system.

Redundancy and error correction can occur in one of the following ways:

- A protocol embedded in the transmission for each channel.

- Repetition of channel data, where the recipient is able to receive the same data several times.

- Local error correction, where the personal receiver communicates with other nearby personal receivers.

Fig. 7 shows one way of carrying out local error correction. In Fig. 7, a number of receivers (B), (C) and (D) receive the same channel information, for example through (A), a WiFi network. Simultaneously, the local personal receivers (B), (C) and (D) are interconnected by a second data channel which is not (A), the WiFi network, but could for example be Bluetooth. In the example, one of the receivers, (C), loses a single data packet, in this case (E), packet number 4. The personal receiver (C) sends a request to the other personal receivers nearby, and receives via (D), another personal device, a copy of (F), the lost packet.
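The local error correction of Fig. 7 can be sketched as follows. This is an illustrative model only: packets are represented as plain dictionaries keyed by sequence number, and the peer query is an in-memory lookup standing in for the Bluetooth exchange between nearby receivers:

```python
def recover_missing(received, expected_count, peers):
    """Sketch of local error correction: find the gaps in this
    receiver's packet sequence and request copies from nearby peer
    receivers (the side channel, e.g. Bluetooth, is abstracted away).

    received:       dict mapping packet number -> payload
    expected_count: total number of packets in the sequence
    peers:          iterable of dicts standing in for nearby receivers
    """
    missing = [n for n in range(expected_count) if n not in received]
    for n in missing:
        for peer in peers:
            if n in peer:  # a nearby device holds a copy
                received[n] = peer[n]
                break
    return received
```

A receiver that lost packet 3, as (C) loses packet 4 in Fig. 7, would fill the gap from the first peer holding a copy.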

The channel transmission system can be constructed according to several principles, depending on which application the system will be used for.

Fig. 8 shows a principal transmission system. Basically, the component (A) is a computer, e.g. a PC, with the necessary interface units, which represents the coding system. The component (N) is here a standard WiFi access point. The component (A) in this example is composed of the following components: (B) is an NMEA interface, which in this example is used for receiving a GPS signal that serves as the location reference for the system. The GPS signal is also used for generating the time synchronization of the system through (I), the system clock.

The DMX input (C) is used as an interface to a system for controlling light effects in a room. The DMX signal (C) is used in this system to synchronize sound effects in the receiving device with light effects in the room.

The AES3 interface (D) is used for reception and subsequent transmission of real-time audio that is streamed out to receiver devices via an audio codec (J).

The management interface (E) is used for administration of the system, including (G) storing audio files in the system that are periodically distributed to the recipients. The management interface is also used for (H) configuration of the various channels' temporal dependencies, and to assign the various channels to virtual or geographical locations. Through (K), the master channel encoder, all synchronization and channel information is collected and, through (M), the transmission interface, distributed to the recipients through (N), the WiFi access point. All channel content is collected in (L), the channel encoder, and transmitted through (M), the transmission interface, to the receivers via (N), the WiFi access point. A number of examples of usage scenarios for the invention are mentioned in the following:

1. Video monitors in public spaces, including for example in public transport. The invention can be used in connection with video monitors placed, for example, in public places where some people would be bothered by the sound from the monitor. Here, the invention makes it possible for people who want to hear the sound to connect via, for example, a smartphone, and then hear the sound belonging to the video signal on the monitor synchronously on one channel, while using another channel for information such as the next stop.

2. Concerts, sound improvement. By using the invention at concerts it will be possible to hear the concert sound in a better quality than the sound from the speakers in the room. Moreover, it would be possible to attenuate the sound pressure from the speakers through the invention and thereby reduce noise pollution in the neighborhood of the concert venue. At the same time it will be possible to create new sound effects and send pictures and video clips to the audience. The invention will also make it possible to let the personal devices blink just like the stage light, thereby creating new effects.

3. Location-based playback of media material. Another application of the invention is spatial playback of media material in connection with an attraction.

At the same time, the invention makes it possible to play media material simultaneously within a group. This creates a sense of cohesion, since everyone in the group is experiencing the same thing at the same time.

4. The experience room. Through the invention it is possible to create a virtual experience space into which multiple virtual sound sources are placed. This enables external events, such as an audio channel belonging to a video signal on a monitor, to be heard synchronously by everybody in the experience room, while individual virtual audio channels can give every person in the room his or her own experience.

5. Synchronized sound playback between two or more personal devices at the same time. Two or more personal devices can transfer file-based or block-based multimedia material, such as an MP3 file, between the devices. One device sends synchronization information, and then all receiving units are able to play the exemplary MP3 file synchronously, so that there is no echo between the devices. This provides the users of the system with simultaneity of the experience.

6. Augmented reality. The invention can be used in an augmented reality environment where a person is equipped with a personal receiver. By pointing the personal receiver in certain directions, the person has the opportunity to receive one or more channels, and possibly composite channels, in relation to the person's geographic location and orientation.

7. Games. The invention can be used with reality games where one or more participants are equipped with their own personal receivers. Through the invention, the various participants can be placed geographically in a virtual sound universe where participants from the same group can hear who is a friend and who is a foe. At the same time it is possible to insert sound effects that are geographically synchronous between all participants, according to their relative positions.

To sum up, the invention provides a system for transmitting one or more simultaneous independent media channels that can be temporally and geographically synchronized between one or more recipients. A broadcast system transmits one or more media channels simultaneously represented in wireless signals. Recipients, e.g. mobile phones or smartphones, receive the wireless signals and extract the media channels therein. Each of the independent media channels includes a time stamp and location data. Based on the time stamp and location data, e.g. information on the location of PA loudspeakers at an event, the recipients can calculate the acoustic delay from the PA loudspeakers and thus, with a known location relative to the loudspeakers, generate an audio signal synchronized with the acoustic signal received from the PA loudspeakers. Hereby it is possible to distribute high quality audio synchronized with a live event, e.g. at a concert, utilizing the fact that the audience at such an event typically brings mobile phones with headphones.

Although the present invention has been described in connection with the specified embodiments, it should not be construed as being in any way limited to the presented examples. The scope of the present invention is set out by the accompanying claim set. In the context of the claims, the terms "comprising" or "comprises" do not exclude other possible elements or steps. Also, the mentioning of references such as "a" or "an" etc. should not be construed as excluding a plurality. The use of reference signs in the claims with respect to elements indicated in the figures shall also not be construed as limiting the scope of the invention. Furthermore, individual features mentioned in different claims may possibly be advantageously combined, and the mentioning of these features in different claims does not exclude that a combination of features is possible and advantageous.

In the following, another variant of the invention will be described.

Today, many video displays exist in public spaces and public transportation but, since many of the video displays are an informational or entertainment option for the persons in the area, much public discussion has arisen over the sound relating to the video material, since some people want to rest while others want to be entertained. Until now, the solutions to the sound issue have been directional audio speakers, like SoundTube, where the audio speaker only covers a small area, leaving only some people in the sound field, and others in areas exposed to the sound due to acoustic reflections.

Another way the sound problem has been attempted solved is by the use of low power FM transmitters, where each person has his/her own personal receiver. But solving the problem by use of FM raises new problems, since FM receivers can experience interference from other nearby FM transmitters, leaving the solution unfit for public transportation due to the high power FM transmitters in the landscape. Another problem with FM is that new cell phones with a built-in FM receiver tend to have a high audio latency, thus leaving the audio signal out of sync with the video signal on the video monitor. Tests have been done to transmit the audio signal by use of digital streaming technologies, but using these technologies it is very difficult to achieve both low latency and a rugged transmission scheme for many personal receivers at the same time, and still preserve the quality of the audio signal. Newer transmission standards, such as DVB-H, DVB-T lite, DAB and DAB+, use complex and expensive equipment, still leaving problems with relatively high latency and susceptibility to radio interference from nearby transmitters.

In essence, this aspect provides a system where an audio part related or connected to a video event is transferred asynchronously to, and stored on, a Client device, e.g. a personal device, wherein synchronization information from a Master device is used to trigger playback on the personal device in order to provide synchronous playback of audio and video. In a first aspect, this variant of the invention provides a system arranged to distribute an audio content and to facilitate synchronous playback of the audio content at a plurality of associated Client devices, such as personal portable devices, such as mobile phones, wherein the system comprises

- a Master device arranged to transmit synchronization information, such as including reference time information and event information, to the Client devices, such as by a wireless link, such as the Client devices being programmed to store the synchronization information,

- a video device arranged to show a video event on a display, such as the video event being displayed on a public display,

- an audio distributor device arranged to deliver a file based audio content to the Client devices prior to the video device showing the video event on the display, such as by a wireless link, so as to enable storage of the file based audio content at the Client devices, such as the Client devices being programmed to store the file based audio content,

wherein the Client devices are programmed to use the synchronization information to play back the file based audio content temporally aligned to the showing of the video event, so as to allow users of the Client devices to experience the file based audio content played back on the Client devices synchronously with the showing of the video event.

With such a system, the audio signal can be transferred fail-safe to the Client devices, and the audio signal from the Client devices can be played synchronously with the corresponding video event, e.g. a video signal shown on a video monitor. Thus, it is possible to present a video event to users, e.g. show a TV signal or commercials, on a video monitor in public areas without presenting the corresponding audio signal as a public acoustic signal. Via the Client devices, the users can receive the audio content and play it back via headphones or the like, so as not to disturb the public area acoustically. In this way it is possible to present linked video and audio content in buses, trains, at railway stations, in airports, in supermarkets etc.

In a second aspect, this variant of the invention provides a method for distributing an audio content and to facilitate synchronous playback of the audio content at a plurality of associated Client devices, such as personal portable devices such as mobile phones, the method comprising

- distributing a file based audio content to the Client devices, such as by a wireless link, so as to enable storage of the file based audio content at the Client devices,

- showing a visual event, such as showing a video on a public display,

- transmitting synchronization information, such as reference time information and event information, to the Client devices, such as by a wireless link, and

- executing program code in the Client devices involving use of the synchronization information to play back stored file based audio content time aligned to the video event, so as to allow users of the Client devices to experience audio content played back on the Client devices synchronously with the showing of the video event, such as video displayed on a public display.

The same advantages apply to the second aspect as to the first aspect. Especially, it is understood that the system of the first aspect and the method of the second aspect can advantageously be used for at least one of: public video displays without speakers, public movie shows, public artist performances, public announcement systems.

The Invention is described within this section by way of embodiments showing possible implementations, thus also including various options. Fig. 9 shows an embodiment of the system for distributing an audio content and to facilitate synchronous playback of the audio content at a plurality of associated Client devices (CLD), e.g. personal portable devices, such as mobile phones. A Master device MD can transmit synchronization information SI, such as including reference time information and event information, to the Client devices CLD. A video device VD is arranged to show a video event on a display, such as the video event being displayed on a public display. E.g. the video event is temporally controlled by the Master device MD. An audio distributor device ADD is arranged to deliver a file based audio content FAC to the Client devices CLD prior to the video device VD showing the video event on the display, such as by a wireless link. Preferably, the audio content is the sound track corresponding to the video event, but optionally it includes other audio tracks and/or other multimedia material as well. The Client devices CLD are programmed to store the file based audio content in their memory. The Client devices CLD are programmed to use the synchronization information SI to playback the file based audio content FAC temporally aligned to the showing of the video event, so as to allow users of the Client devices CLD to experience the file based audio content played back on the Client devices CLD synchronously with the showing of the video event.

It is understood that the functions of the Master device MD, the audio distributor device ADD, and the video device VD can be implemented in many ways, either as separate devices or as one integrated device. Fig. 10 illustrates an example of a basic system setup for a system incorporating a video monitor. The reference signs indicate: A file based video player (including the functions of the Master device, video device and audio distributor device), B video monitor, C WiFi network, D WiFi connection, and E personal device (Client device). A file based video player A is connected to a video monitor B. The file based video player is also connected to a WiFi network C for transmission of synchronization signals and sharing of audio material. The personal devices E are connected to the WiFi network by use of a wireless connection D. Each of the personal devices E is capable of audio playback. By use of a dedicated protocol, the personal devices E are capable of sharing the loading of the audio material among themselves, to reduce the processing load on the file based video player.

Fig. 11 illustrates an example of a Master synchronization system. The reference signs indicate: A. Internal clock circuit, B. Reference transmission system, C. State/Event aware system, D. state/event transmission system, and E. Optional compensated reference system.

The Master synchronization system, as shown in Fig. 11, is divided into two parts; one part is the Master device, in this setup say a video player. The internal clock circuit A generates the reference time signal for the whole system. By use of a transmission system B, say the TCP/IP protocol, the reference time signal is transferred to all client devices within the system. A state/event aware system C keeps track of the video playback, inserts a time stamp related to the playback states, and transfers the information via a transmission system D, say the TCP/IP protocol, to the client devices. If an optional compensated reference time is needed, e.g. to compensate for internal processing delays, an optional compensated reference sub system E can be inserted.

Fig. 12 illustrates an example of a Client synchronization system. The reference signs indicate: A. Reference time receiver, B. Internal clock circuit, C. Reference time extractor, D. Reference time compensator, E. State/event receiver, F. Scheduler.

In the Client synchronization system, as shown in Fig. 12, the reference time from the Master device is received by A. The received reference time is aligned to the internal clock circuit B in a reference time extractor C. By deriving the reference time on the basis of the client's internal clock circuit B, the system reference time can always be extracted. To compensate for processing delays introduced within a client device, a compensated reference time can be extracted from the reference time compensator D.

State changes/events from the Master are received by the state/event receiver E. When the Client device needs to execute e.g. a play function, the state/event received by E is handled by the scheduler F. The scheduler F takes the derived reference time, extracts the change/event time stamp, and waits until the time equals or exceeds the time given within the time stamp.
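The scheduler behaviour described above can be sketched as follows. This is a minimal, polling-based illustration in which the derived reference time is supplied as a callable; class and method names are hypothetical:

```python
class Scheduler:
    """Sketch of the scheduler F: fires an event's action once the
    derived reference time equals or exceeds the event's time stamp."""

    def __init__(self, reference_clock):
        # reference_clock: callable returning the derived reference time
        self.reference_clock = reference_clock
        self.pending = []  # list of (time_stamp, action) pairs

    def schedule(self, time_stamp, action):
        self.pending.append((time_stamp, action))

    def poll(self):
        # run every pending event whose time stamp has been reached
        now = self.reference_clock()
        due = [e for e in self.pending if now >= e[0]]
        self.pending = [e for e in self.pending if e not in due]
        for _, action in sorted(due, key=lambda e: e[0]):
            action()
```

A real implementation would typically block or sleep until the next time stamp rather than poll, but the ordering logic is the same.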

In this section, examples of use of the variant of the invention are given:

1) Public video displays without speakers. Video monitors in buses, trains, airplanes and other public places of interest, where a video monitor displays a pre-recorded or near real time video signal. The Invention then delivers the audio signal to the audience for a time aligned audio playback synchronized to the displayed video signal.

2) Public movie shows. Public movie shows such as open air cinema, drive-in cinema, in-cinema shows, and other places of interest. The Invention delivers the audio signal to the audience, either as one complete audio track or as several audio blocks depending on the transmission scheme, for a time aligned audio playback synchronized to the movie. Multiple audio tracks could be used in this scenario, making it possible for the audience to select specific audio tracks such as languages.

3) Public artist performances. Some public artist performances, such as concerts, ballet and theatre, use from time to time a lot of pre-recorded audio material. By use of the Invention, the pre-recorded material can be distributed to the audience and then synchronized to the event for synchronized audio playback. Audio effects, such as gunshots, can be distributed as separate audio tracks by the Invention, giving a synchronized playback each time the gun is fired.

4) Public announcement systems. For public announcement systems, e.g. used in stadiums, pre-recorded announcements such as fire alarms, safety announcements and exit procedures can be distributed to the audience by the Invention, making it possible to select zones within the area and trigger synchronized messages for each zone. This makes it possible to control the crowd, e.g. by triggering a message telling the crowd in zone A that they are allowed to leave the stadium.

Some embodiments E1-E21 of the variant of the invention are:

E1. A system arranged to distribute an audio content and to facilitate synchronous playback of the audio content at a plurality of associated Client devices (CLD), such as personal portable devices, such as mobile phones, wherein the system comprises

- a Master device (MD) arranged to transmit synchronization information (SI), such as including reference time information and event information, to the Client devices (CLD),

- a video device (VD) arranged to show a video event on a display, such as the video event being displayed on a public display,

- an audio distributor device (ADD) arranged to deliver a file based audio content (FAC) to the Client devices (CLD) prior to the video device (VD) showing the video event on the display, such as by a wireless link, so as to enable storage of the file based audio content at the Client devices (CLD), such as the Client devices (CLD) being programmed to store the file based audio content,

wherein the Client devices (CLD) are programmed to use the synchronization information to playback the file based audio content (FAC) temporally aligned to the showing of the video event, so as to allow users of the Client devices (CLD) to experience the file based audio content played back on the Client devices (CLD) synchronously with the showing of the video event.

E2. System according to E1, wherein the Master device comprises a wireless transmitter arranged to transmit the reference time information and the event information represented in a Radio Frequency signal.

E3. System according to E1 or E2, wherein the audio distributor is arranged to transmit the file based audio content represented in a Radio Frequency signal, and/or the audio distributor is arranged to transmit the file based audio content via the internet.

E4. System according to any of E1-E3, wherein the synchronization information comprises reference time information arranged to allow time alignment of an internal clock circuit of the Client devices.

E5. System according to E4, wherein the synchronization information comprises event information allowing the Client devices to start playback of the stored filed based audio content at a predetermined time, so as to allow synchronization of audio playback with the video event.

E6. System according to any of E1-E5, wherein the Client devices comprise at least one of: a Smartphone, a tablet, a Personal Digital Assistant, and a laptop computer.

E7. System according to any of E1-E6, wherein users can, via the Client devices, select different audio tracks of the file based audio content, such as different languages.

E8. System according to any of E1-E7, wherein the Client devices have the possibility to play back different audio tracks of the file based audio content, such as simultaneously, based on transmitted synchronization information from the Master device.

E9. System according to E8, wherein the Client devices have the possibility to turn off some audio tracks, to help support hearing-impaired persons.

E10. System according to any of E1-E9, wherein a user can, via a Client device, select specific audio information from the file based audio content, such as including a message when to get off the train.

E11. System according to any of E1-E10, wherein the Client devices have the possibility to support other multimedia content, such as video content and picture based content.

E12. System according to any of E1-E11, wherein the Client devices have the possibility to select between different audio tracks, based on a location of each individual Client device.

E13. System according to any of E1-E12, wherein a user can, via a Client device, select specific multimedia information, wherein said multimedia information can be triggered from the Master device.

E14. System according to any of E1-E13, wherein the Master device and the Client devices can play back near real time feeds, such as a received broadcast, where either the Master or another device receives the feed and aligns the feed in blocks for transfer in the file based audio content, within a short period of time.

E15. System according to any of E1-E14, wherein the file based audio content, and optionally also other media files, are transferred asynchronously to the Client devices, according to information given from the Master device.

E16. System according to any of E1-E15, wherein it is possible to add three dimensional audio filtering, such as head-related transfer functions, in order to place one or more audio channels represented in the file based audio content around a user of a Client device.

E17. System according to any of E1-E16, wherein at least one of the Client devices is arranged to store shared media content for sharing to other Client devices connected to the same network.

E18. System according to E17, wherein the Master device acts as the media content origin, and wherein the Client devices take a file transfer load from the audio distributor device.

E19. System according to any of E1-E18, wherein the video device comprises a video monitor arranged to present the video event to a public.

E20. Method for distributing an audio content and to facilitate synchronous playback of the audio content at a plurality of associated Client devices, such as personal portable devices such as mobile phones, the method comprising

- distributing a file based audio content to the Client devices, such as by a wireless link, so as to enable storage of the file based audio content at the Client devices,

- showing a visual event, such as showing a video on a public display,

- transmitting synchronization information, such as reference time information and event information, to the Client devices, such as by a wireless link, and

- executing program code in the Client devices involving use of the synchronization information to play back stored file based audio content time aligned to the video event, so as to allow users of the Client devices to experience audio content played back on the Client devices synchronously with the showing of the video event, such as video displayed on a public display.

E21. Use of the system or the method according to any of E1-E20, for at least one of: public video displays without speakers, public movie shows, public artist performances, public announcement systems.