Title:
CONTENT PRESENTATION METHOD AND APPARATUS
Document Type and Number:
WIPO Patent Application WO/2014/075128
Kind Code:
A1
Abstract:
Apparatus for presenting content, the apparatus including an electronic processing device that determines a location of each of a number of mobile communication devices, determines content to be presented, determines a content portion for each mobile communications device based on the location of the mobile communication device, and causes each mobile communication device to present the content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations.

Inventors:
CASAGRANDE CANDICE (AU)
Application Number:
PCT/AU2013/001270
Publication Date:
May 22, 2014
Filing Date:
November 01, 2013
Assignee:
ANCHOR INNOVATIONS PTY LTD (AU)
International Classes:
H04W4/02; H04W4/029; H04W4/06; H04W40/20
Foreign References:
US20120263439A12012-10-18
US20100009700A12010-01-14
Attorney, Agent or Firm:
DAVIES COLLISON CAVE (301 Coronation Drive, Milton, Queensland 4064, AU)
Claims:
THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:

1) Apparatus for presenting content, the apparatus including an electronic processing device that:

a) determines a location of each of a number of mobile communication devices;

b) determines content to be presented;

c) determines a content portion for each mobile communications device based on the location of the mobile communication device; and,

d) causes each mobile communication device to present the content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations.

2) Apparatus according to claim 1, wherein the content includes at least one of:

a) at least one image;

b) at least one video sequence;

c) vibrations; and,

d) audio information.

3) Apparatus according to claim 1 or claim 2, wherein the content portion includes a region of pixels of an image or video sequence, and wherein the electronic processing device selects the region of pixels based on the location of the mobile communication device.

4) Apparatus according to any one of the claims 1 to 3, wherein the method includes, in the electronic processing device, determining the location of each mobile communication device using an indication of location information received from each mobile communication device.

5) Apparatus according to claim 4, wherein the location information is indicative of a seat or seating area allocated to the user.

6) Apparatus according to claim 4 or claim 5, wherein the location information is determined by the mobile communication device using signals received from the electronic processing device.

7) Apparatus according to any one of the claims 1 to 6, wherein the method includes, in the electronic processing device, determining the location of each mobile communication device using signals received from each mobile communication device.

8) Apparatus according to any one of the claims 1 to 7, wherein the location of the mobile communication device is determined using at least one of time of flight and strength of signals transmitted between the mobile communication device and the electronic processing system.

9) Apparatus according to any one of the claims 1 to 8, wherein the electronic processing device is coupled to a transceiver to allow radio signals to be transmitted to and received from the mobile communication devices.

10) Apparatus according to any one of the claims 1 to 9, wherein the electronic processing device:

a) determines a schedule of content to be presented; and,

b) causes the content to be presented in accordance with the schedule.

11) Apparatus according to any one of the claims 1 to 10, wherein the electronic processing device transfers a content portion to each mobile communication device when the content portion is to be presented, the mobile communication device presenting the content portion substantially upon receipt.

12) Apparatus according to any one of the claims 1 to 10, wherein the electronic processing device:

a) determines a trigger associated with each content;

b) transfers an indication of the trigger to each mobile communication device, together with the content portion, wherein the mobile communication device is responsive to the trigger to present the content portion.

13) Apparatus according to any one of the claims 1 to 12, wherein the apparatus includes a mobile communication device that:

a) receives a content portion from an electronic processing device; and,

b) at least one of:

i) presents the content portion; and,

ii) stores the content portion.

14) Apparatus according to claim 13, wherein the mobile communication device presents a content portion as it is received from the electronic processing device.

15) Apparatus according to claim 13 or claim 14, wherein the mobile communication device:

a) determines a trigger; and,

b) presents the content portion in accordance with the trigger.

16) Apparatus according to any one of the claims 13 to 15, wherein the mobile communication device:

a) determines a location; and,

b) transfers an indication of the location to the electronic processing device.

17) Apparatus according to claim 16, wherein the mobile communication device:

a) determines an indication of a seat allocation; and

b) transfers an indication of the seat allocation to the electronic processing device, the electronic processing device using the seat allocation to determine the location.

18) Apparatus according to claim 17, wherein the mobile communication device determines a seat allocation in accordance with at least one of:

a) input commands from a user;

b) sensing of coded data indicative of a seat allocation;

c) an external sensor; and,

d) communication with a ticketing software application.

19) Apparatus according to any one of the claims 13 to 18, wherein the mobile communication device includes:

a) a transceiver for communicating with the electronic processing device via a communications network;

b) an output for presenting content portions;

c) a processor that:

i) receives a content portion from the transceiver; and,

ii) causes the content to be presented by the output.

20) Apparatus according to claim 19, wherein the output includes at least one of:

a) a speaker;

b) a vibrating mechanism;

c) an external display controlled by the mobile communications device; and,

d) a display.

21) Apparatus according to claim 19 or claim 20, wherein the mobile communication device includes a memory for storing the content portion prior to presentation.

22) Apparatus according to any one of the claims 19 to 21, wherein the mobile communication device includes an input for detecting a trigger and wherein the processor uses detection of the trigger to cause the content to be presented.

23) Apparatus according to claim 22, wherein the input includes a microphone.

24) Apparatus according to any one of the claims 1 to 23, wherein the electronic processing device:

a) causes an interface to be displayed to an operator, the interface including a venue representation;

b) determines an allocation of content to the venue representation in accordance with input commands received from the operator; and,

c) determines content portions for respective locations within the venue using the venue representation and the allocation.

25) Apparatus according to any one of the claims 1 to 24, wherein the electronic processing device is part of a processing system including a memory for storing at least one of:

a) content;

b) content portions; and,

c) trigger indications.

26) A method for presenting content, the method including:

a) determining a location of each of a number of mobile communication devices;

b) determining content to be presented;

c) determining a content portion for each mobile communications device based on the location of the mobile communication device; and,

d) causing each mobile communication device to present the content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations.

27) A method for presenting content, the method including causing each of a number of mobile communication devices to present a respective content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations.

28) Apparatus for recording content, the apparatus including an electronic processing device that:

a) receives a content portion recorded by each of a number of mobile communications devices;

b) determines a location of each of the number of mobile communication devices; and,

c) stores the content portions based on the location of the mobile communication device.

29) A method for recording content, the method including, in an electronic processing device:

a) receiving a content portion recorded by each of a number of mobile communications devices;

b) determining a location of each of the number of mobile communication devices; and,

c) storing the content portions based on the location of the mobile communication device.

30) A method for recording content, the method including causing each of a number of mobile communication devices to record a respective content portion such that the content is recorded collectively by the number of mobile communication devices based on their relative locations.

Description:
CONTENT PRESENTATION METHOD AND APPARATUS

Background of the Invention

[0001] The present invention relates to a method and apparatus for presenting content, and in one particular example, for presenting content portions on a number of mobile communication devices based on the relative location of the mobile communication devices, so the content is presented collectively across the mobile communication devices.

Description of the Prior Art

[0002] The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as, an acknowledgment or admission or any form of suggestion that the prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.

[0003] Interaction with audiences is a key element of live entertainment and is common in the art. For example, a common audience participation feature is the wave, which involves the audience performing a synchronised wave of their arms around the auditorium so that it looks as if a wave has travelled through the audience.

[0004] There is also a growing trend for artists to explore the use of digital technologies to interact with devices used by audience members. An example of this is the use of sound waves outside the normal range of human hearing to communicate with audience devices during a performance.

[0005] The participation and engagement of the audience in events can be of critical importance to a performance, and the use of sound as a control mechanism, while interesting, is very limited in the level of control and interaction that can be delivered.

Summary of the Present Invention

[0006] In a first broad form the present invention seeks to provide apparatus for presenting content, the apparatus including an electronic processing device that:

a) determines a location of each of a number of mobile communication devices;

b) determines content to be presented;

c) determines a content portion for each mobile communications device based on the location of the mobile communication device; and,

d) causes each mobile communication device to present the content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations.

[0007] Typically the content includes at least one of:

a) at least one image;

b) at least one video sequence;

c) vibrations; and,

d) audio information.

[0008] Typically the content portion includes a region of pixels of an image or video sequence, and wherein the electronic processing device selects the region of pixels based on the location of the mobile communication device.

[0009] Typically the method includes, in the electronic processing device, determining the location of each mobile communication device using an indication of location information received from each mobile communication device.

[0010] Typically the location information is indicative of a seat or seating area allocated to the user.

[0011] Typically the location information is determined by the mobile communication device using signals received from the electronic processing device.

[0012] Typically the method includes, in the electronic processing device, determining the location of each mobile communication device using signals received from each mobile communication device.

[0013] Typically the location of the mobile communication device is determined using at least one of time of flight and strength of signals transmitted between the mobile communication device and the electronic processing system.

[0014] Typically the electronic processing device is coupled to a transceiver to allow radio signals to be transmitted to and received from the mobile communication devices.

[0015] Typically the electronic processing device:

a) determines a schedule of content to be presented; and,

b) causes the content to be presented in accordance with the schedule.

[0016] Typically the electronic processing device transfers a content portion to each mobile communication device when the content portion is to be presented, the mobile communication device presenting the content portion substantially upon receipt.

[0017] Typically the electronic processing device:

a) determines a trigger associated with each content;

b) transfers an indication of the trigger to each mobile communication device, together with the content portion, wherein the mobile communication device is responsive to the trigger to present the content portion.

[0018] Typically the apparatus includes a mobile communication device that:

a) receives a content portion from an electronic processing device; and,

b) at least one of:

i) presents the content portion; and,

ii) stores the content portion.

[0019] Typically the mobile communication device presents a content portion as it is received from the electronic processing device.

[0020] Typically the mobile communication device:

a) determines a trigger; and,

b) presents the content portion in accordance with the trigger.

[0021] Typically the mobile communication device:

a) determines a location; and,

b) transfers an indication of the location to the electronic processing device.

[0022] Typically the mobile communication device:

a) determines an indication of a seat allocation; and

b) transfers an indication of the seat allocation to the electronic processing device, the electronic processing device using the seat allocation to determine the location.

[0023] Typically the mobile communication device determines a seat allocation in accordance with at least one of:

a) input commands from a user;

b) sensing of coded data indicative of a seat allocation;

c) an external sensor; and,

d) communication with a ticketing software application.

[0024] Typically the mobile communication device includes:

a) a transceiver for communicating with the electronic processing device via a communications network;

b) an output for presenting content portions;

c) a processor that:

i) receives a content portion from the transceiver; and,

ii) causes the content to be presented by the output.

[0025] Typically the output includes at least one of:

a) a speaker;

b) a vibrating mechanism;

c) an external display controlled by the mobile communications device; and,

d) a display.

[0026] Typically the mobile communication device includes a memory for storing the content portion prior to presentation.

[0027] Typically the mobile communication device includes an input for detecting a trigger and wherein the processor uses detection of the trigger to cause the content to be presented.

[0028] Typically the input includes a microphone.

[0029] Typically the electronic processing device:

a) causes an interface to be displayed to an operator, the interface including a venue representation;

b) determines an allocation of content to the venue representation in accordance with input commands received from the operator; and,

c) determines content portions for respective locations within the venue using the venue representation and the allocation.

[0030] Typically the electronic processing device is part of a processing system including a memory for storing at least one of:

a) content;

b) content portions; and,

c) trigger indications.

[0031] In a second broad form the present invention seeks to provide a method for presenting content, the method including:

a) determining a location of each of a number of mobile communication devices;

b) determining content to be presented;

c) determining a content portion for each mobile communications device based on the location of the mobile communication device; and,

d) causing each mobile communication device to present the content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations.

[0032] In a third broad form the present invention seeks to provide a method for presenting content, the method including causing each of a number of mobile communication devices to present a respective content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations.

[0033] In a fourth broad form the present invention seeks to provide apparatus for recording content, the apparatus including an electronic processing device that:

a) receives a content portion recorded by each of a number of mobile communications devices;

b) determines a location of each of the number of mobile communication devices; and,

c) stores the content portions based on the location of the mobile communication device.

[0034] In a fifth broad form the present invention seeks to provide a method for recording content, the method including, in an electronic processing device:

a) receiving a content portion recorded by each of a number of mobile communications devices;

b) determining a location of each of the number of mobile communication devices; and,

c) storing the content portions based on the location of the mobile communication device.

[0035] In a sixth broad form the present invention seeks to provide a method for recording content, the method including causing each of a number of mobile communication devices to record a respective content portion such that the content is recorded collectively by the number of mobile communication devices based on their relative locations.

Brief Description of the Drawings

[0036] An example of the present invention will now be described with reference to the accompanying drawings, in which:

[0037] Figure 1 is a flowchart of an example of a method of presenting content;

[0038] Figure 2 is a schematic diagram of an example of a network computer architecture;

[0039] Figure 3 is a schematic diagram of an example of a processing system;

[0040] Figure 4 is a schematic diagram of an example of a mobile communication device;

[0041] Figure 5 is a flowchart of an example of a process for registering a mobile communications device with a venue server;

[0042] Figure 6A is a flowchart of a first specific example method for presenting content;

[0043] Figure 6B is a schematic diagram of an example of image content;

[0044] Figure 6C is a schematic diagram of the content portions formed from the image content of Figure 6B;

[0045] Figure 7 is a flowchart of a second specific example of a method for presenting content;

[0046] Figure 8A is a flowchart of a method of determining a mobile communications device location;

[0047] Figure 8B is a schematic diagram of a specific example of apparatus for determining a mobile communications device location;

[0048] Figure 9A is a flowchart of an example of a method for defining content portions; and,

[0049] Figure 9B is a schematic diagram of an example of a user interface for defining content portions.

Detailed Description of the Preferred Embodiments

[0050] An example of a method for presenting content will now be described with reference to Figure 1.

[0051] In this example, it is assumed that the process is performed at least in part using an electronic processing device, which is in turn in communication with one or more mobile communication devices. The electronic processing device is typically part of a processing system, such as a personal computer, server, mobile communications device, or the like, which is capable of determining the position of the mobile communications devices and causing content to be presented thereon. The mobile communication devices are capable of communicating with the electronic processing device and of presenting content, such as visual, audible or haptic content, or the like, and may include mobile phones, smart phones, tablets, computers, or the like. It will be appreciated from this that the term "mobile communications device" is not intended to be limiting and could apply to any device that is portable, more typically handheld, and which allows content to be presented. The term venue can refer to any location in which content is presented, and can include indoor and outdoor venues, such as stadiums, arenas, concert venues, parks, outdoor spaces or the like.

[0052] In this example, at step 100 the electronic processing device determines a location of each of a number of mobile communication devices. This may be achieved in any one of a number of manners and can involve having the electronic processing system sense the location of mobile communication devices, for example using signals received from mobile communications devices, or alternatively can involve receiving an indication of location information from the mobile communication devices, as will be described in more detail below.

[0053] At step 110 the electronic processing device determines content to be presented. This may be achieved using any suitable mechanism and can include retrieving content from memory or a database, receiving content from a remote server, selecting content scheduled for presentation, having content provided, selected or defined by an operator, or the like.

[0054] At step 120 the electronic processing device determines a content portion for each mobile communications device based on the location of the mobile communications device. This may be achieved in any suitable manner and could be performed manually based on operator inputs, for example by having an operator define different content portions and then allocate these to particular locations in a venue, or the like. Alternatively, this could be performed based on predetermined rules, which define, for different types of content, how content portions should be defined and how these content portions should be presented. For example, the rules could specify how images are to be divided into image portions based on a seating plan, so that a respective image portion is provided to each seat, as illustrated in the sketch below. Different rules could then be defined for different content types, as well as for different individual content instances.
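
By way of illustration only, the following Python sketch shows one possible form such a rule could take, assuming a rectangular seating block whose rows and columns map directly onto a grid of pixel regions. The Seat type, function name and grid dimensions are illustrative assumptions rather than anything prescribed by the method.

# A minimal sketch of a seating-plan rule: each seat in a rows x cols block
# is allocated one rectangular pixel region of the image.
from dataclasses import dataclass

@dataclass(frozen=True)
class Seat:
    row: int  # 0-based row index within the seating block
    col: int  # 0-based column index within the seating block

def portion_for_seat(seat: Seat, rows: int, cols: int,
                     img_w: int, img_h: int) -> tuple[int, int, int, int]:
    """Return the (left, top, right, bottom) pixel box allocated to a seat."""
    tile_w = img_w // cols
    tile_h = img_h // rows
    left = seat.col * tile_w
    top = seat.row * tile_h
    return (left, top, left + tile_w, top + tile_h)

# Example: a 20-row x 30-seat block displaying a 1200 x 800 pixel image.
print(portion_for_seat(Seat(row=0, col=0), 20, 30, 1200, 800))    # (0, 0, 40, 40)
print(portion_for_seat(Seat(row=19, col=29), 20, 30, 1200, 800))  # (1160, 760, 1200, 800)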

[0055] At step 130 the electronic processing device causes each mobile communication device to present the respective content portion such that the content is presented collectively by the number of mobile communication devices based on their relative locations. In one example, this is achieved by having content portions pushed to the mobile communications device from the electronic processing device, although alternatively this can be achieved by having the mobile communication device retrieve the content portion from a remote data store. Presentation of the content portion can occur immediately upon receipt of the content portion, or alternatively at a later time, for example in accordance with a schedule or other trigger, as will be described in more detail below. The manner in which content is presented will depend on the nature of the content and the mobile communications device. In one example, this can include displaying a portion of an image or video sequence, playing audible sounds, vibrating the mobile communications device, or the like.

[0056] Accordingly, the above described method can be used to cause each of a number of mobile communication devices to present a respective content portion such that content is presented collectively by the number of mobile communication devices based on their relative locations.

[0057] It will be appreciated that the above described method can be utilised in any environment where a number of people are present and where it is desired to present content collectively across the mobile communication devices, or to display content on a selected one of a number of mobile communications devices.

[0058] The content, which could be in the form of an image or video sequence, is effectively divided up between the number of mobile communication devices so that each device displays a respective portion of the image or video sequence. In one particular example the method is utilised in presenting content where a crowd of people are gathered, so that when the crowd members position their mobile communication devices facing in a common direction, such as towards a stage or television camera, the entire image or video sequence can be viewed collectively across the mobile communications devices. This is particularly advantageous in venues where crowds are seated, such as at a music concert, sporting event or the like, as it allows for audience engagement by having the audience collaborate to allow content to be presented. Thus, for example, before a team game a crowd can hold aloft their phones or tablets allowing an image of a club's logo to be displayed across the entire crowd or a part thereof.

[0059] However, the process is not restricted to such environments and alternatively the method could be performed in more general locations, such as on a street or the like. An additional feature is that the content portions do not need to be presented concurrently. Instead, for example, a virtual area may be defined on a street, with content being presented on mobile communications devices as they traverse the area. In this case, the entire content may not be presented until all regions within the area have been traversed. In this example, time lapse photography or similar could be utilised to reconstruct the entire image.

[0060] Additionally, the process can be applied to other forms of content such as audio content, or vibrations. In this instance, different audio content may be presented by different mobile communication devices so that different audio content can be presented in different parts of an auditorium or other venue. In one example, this can be used to allocate different instruments or tracks within a musical composition to different parts of an auditorium, so that for example drums may be presented via mobile communication devices in one part of an arena, whilst guitar and vocals are presented in other parts.

[0061] The process can also be applied to recording of content. For example, it is typical to want to record audience sounds as part of a recording of an event, for example to add atmosphere to live recording of performances, television broadcasts or the like. Accordingly, in another example, the recording ability of a number of communications devices can be used to record a crowd at multiple different locations, with the recorded content being returned to a central server for subsequent use.

[0062] Thus, in this example, the electronic processing device typically receives a content portion recorded by each of a number of mobile communications devices, determines a location of each of the number of mobile communication devices and stores the content portions based on the location of the mobile communication device, so that the content can later be used as required. It will be appreciated that the first two steps, namely receiving the content portions and determining the location of the mobile communications device, could be performed in any order and the example given is not intended to be limiting.
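
As a loose illustration of this recording variant, the following sketch assumes the server already holds a mapping from device identifiers to venue locations and simply files each uploaded clip under the recording device's location; the class and its field names are hypothetical.

# A minimal sketch of storing recorded content portions by location.
from collections import defaultdict

class RecordingStore:
    def __init__(self) -> None:
        # location -> list of clips recorded by devices at that location
        self._clips: dict[tuple[float, float], list[bytes]] = defaultdict(list)

    def store(self, device_id: str, audio: bytes,
              locations: dict[str, tuple[float, float]]) -> None:
        """File one recorded portion under the recording device's location."""
        self._clips[locations[device_id]].append(audio)

    def clips_at(self, location: tuple[float, float]) -> list[bytes]:
        return list(self._clips.get(location, []))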

[0063] It will also be appreciated that as each device can be used to display respective content portions, this can be used to push different content, such as personalised notifications to each or selected members of a crowd. For example, this could be used to notify an individual within the audience that they have won a prize, whilst it could also be used to push emergency information to devices based on their location, for example to provide information regarding the nearest emergency exits, or the like.

[0064] It will further be appreciated that by implementing this using an electronic processing device that can cause each mobile communication device to present respective content portions, this allows the control of content presentation to be achieved relatively easily. For example, a single computer system or server could be used to allocate content portions to respective parts or seats within a venue. This in turn allows presentation of content to be dynamically controlled by the operator or the electronic processing device, without necessarily requiring input from different users of the mobile communications devices.

[0065] Alternatively, the electronic processing device could be part of one or more of the mobile communication devices, so that the process is controlled by one or more mobile communications devices operating in conjunction, without necessarily requiring a central server or similar. In one example, this is achieved by establishing a mesh network or similar between the mobile communication devices, with each mobile communication device establishing its relative position with respect to the other mobile communication devices in the network, thereby allowing respective content portions to be presented.

[0066] A number of further features will now be described.

[0067] In one example, when the content is an image or video sequence, the content portion includes a region of pixels of the image or video sequence, commonly referred to as a "tile", in which case the electronic processing device selects the region of pixels (the tile) based on the location of the mobile communications device. Thus in this instance, it will be appreciated that the entire pixel content of the image or video sequence can be presented across the number of mobile communications devices, by having each mobile communications device present the tile corresponding to the relative location of the mobile communications device within the overall image or video sequence.
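
Purely as a sketch of this tiling idea, the following assumes the device's grid position has already been derived from its location, and uses the Pillow imaging library to crop the corresponding tile; the grid shape is an illustrative assumption.

# Crop the pixel region (tile) corresponding to one grid position.
from PIL import Image

def extract_tile(image: Image.Image, grid_row: int, grid_col: int,
                 grid_rows: int, grid_cols: int) -> Image.Image:
    tile_w = image.width // grid_cols
    tile_h = image.height // grid_rows
    left, top = grid_col * tile_w, grid_row * tile_h
    return image.crop((left, top, left + tile_w, top + tile_h))

# e.g. the centre tile of the three by three grid shown in Figure 6B:
# centre = extract_tile(Image.open("content.png"), 1, 1, 3, 3)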

[0068] However, this is not essential, and the regions may alternatively be spaced across the image or video sequence to represent the spacing between the mobile communication devices in use. In this regard, it will be understood that if each mobile communication device is being used by a respective person, the mobile communication devices will typically be spaced apart when used, so that spacing between the regions is required for the image to be more accurately represented.

[0069] In a further variation, the tile could be larger than the actual display of the device, with movement of the device being detected and used to display a different part of the tile, allowing the entire tile to be displayed only as the mobile communication device is moved over a corresponding area.

[0070] It will also be appreciated that the presentation of the content and the content portions presented by each device may vary depending on the nature of the content. For example, the image could simply correspond to coloured regions so that each content portion represents a particular colour. Additionally, whilst each content portion may be unique, alternatively identical content portions can be displayed across several groups of mobile communication devices, for example if these are positioned near to each other or within a common area of a venue, thereby allowing large areas of colour to be displayed. This can also be applied to all mobile communication devices, for example, to allow all of the mobile communications devices to display the same colour.

[0071] The electronic processing device can be used to select content for display and allocate content portions to different locations. This can be performed in accordance with predetermined rules, so that for example different seats within a venue are automatically assigned to respective pixel regions of an image. In another example this is achieved by having the electronic processing device display an interface to an operator, which allows the operator to manually assign different content portions to different locations. For example, this could include a venue representation, with the electronic processing device determining an allocation of content to the venue representation in accordance with input commands received from the operator and then determining content portions for respective locations within the venue using the venue representation and allocation. In one particular example, the venue representation can correspond to a seating map or similar, with the operator positioning content on the seating map and the electronic processing device then automatically allocating portions of the content to respective seats based on their relative location.

[0072] As mentioned above, the location of each mobile communication device can be determined in any one of a number of manners. In one example, location information can be determined using signals received from the electronic processing device, for example by triangulating a position of a mobile communication device relative to fixed antennas, using signal strength or time of flight of signals received from or by the antennas to determine a position of the mobile communication device relative to the antennas. In this example, the electronic processing device is typically coupled to one or more transceivers to allow radio signals to be transmitted to and received from the mobile communication devices.
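
The following is a minimal trilateration sketch of the kind of calculation this implies: given distances from a device to three fixed antennas (for example, time of flight multiplied by the speed of light), the device position is recovered by linear least squares. The antenna coordinates and ranges are invented for illustration, and real measurements would of course be noisy.

# Solve for a device position from ranges to known antenna positions.
import numpy as np

def locate(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    """anchors: (n, 2) antenna coordinates; distances: (n,) ranges in metres."""
    x0, y0 = anchors[0]
    d0 = distances[0]
    # Linearise by subtracting the first range equation from the others.
    a = 2.0 * (anchors[1:] - anchors[0])
    b = (d0**2 - distances[1:]**2
         + anchors[1:, 0]**2 - x0**2 + anchors[1:, 1]**2 - y0**2)
    position, *_ = np.linalg.lstsq(a, b, rcond=None)
    return position

antennas = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 30.0]])
ranges = np.array([25.0, 33.54, 25.0])  # consistent with a device near (20, 15)
print(locate(antennas, ranges))         # approximately [20. 15.]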

[0073] However, this can require complex infrastructure in order for a location of each of a number of mobile communication devices to be determined, and accordingly, in a further example, this is achieved using an indication of location information received from each mobile communication device. The location information can be an absolute location, for example determined using a positioning system such as GPS (Global Positioning System) or the like. However, the accuracy of such positioning systems can be limited, especially in environments where GPS signals are not readily received, such as indoor arenas or the like.

[0074] Accordingly, in another example, the location information is indicative of a user's expected position within a venue, such as a seat allocation or the like. In this example, the mobile communication device determines an indication of a seat allocation and then transfers an indication of the seat allocation to the electronic processing device. The seat allocation can be determined based on input commands from a user, for example by having the user enter their seat number using an alphanumeric keypad or other similar input, through sensing of coded data indicative of a seat location, such as scanning of a barcode or QR (quick response) code or similar printed on a ticket, through communication with a ticketing software application, for example one that has received a virtual ticket on the mobile communication device, or using an external sensing device that can detect the position of the mobile communications device. In one example, this could be achieved by having sensors positioned in a venue that can communicate with the mobile communications device over a limited range, allowing a location to be determined based on the position of the sensing device. So, for example, each seat could include a short range sensor, such as an NFC (Near Field Communications) sensor that can detect the presence of the mobile communications device using NFC.
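
To make the coded-data option concrete, the sketch below parses a hypothetical ticket payload into its seat fields. The "SEC:...;ROW:...;SEAT:..." format is invented for illustration; a real ticket barcode or QR code would use whatever scheme the ticketing system defines.

# Recover a seat allocation from scanned coded data (hypothetical format).
def parse_seat_payload(payload: str) -> dict[str, str]:
    fields = dict(part.split(":", 1) for part in payload.split(";"))
    missing = {"SEC", "ROW", "SEAT"} - fields.keys()
    if missing:
        raise ValueError(f"ticket payload missing fields: {sorted(missing)}")
    return fields

print(parse_seat_payload("SEC:A;ROW:12;SEAT:7"))
# {'SEC': 'A', 'ROW': '12', 'SEAT': '7'}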

[0075] The electronic processing device may determine content to be presented using any suitable technique. In one example, this involves using a schedule, with the electronic processing device determining a schedule of content to be presented and causing the content to be presented in accordance with the schedule. Thus, for example, the schedule could correspond to timing information associated with a performance, such as a run sheet or the like. This allows the electronic processing device to synchronise the presentation of content with a performance and/or with other events, such as when competing teams enter a sporting venue. Alternatively, this may be achieved manually, in accordance with user and/or operator input commands, or through a combination of such mechanisms.
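
A run-sheet-driven dispatcher could be as simple as the following sketch, which pairs content identifiers with offsets from the start of the performance and dispatches them in order; the timings and the dispatch callback are illustrative assumptions.

# Present scheduled content items at their run-sheet offsets.
import time

def run_schedule(schedule: list[tuple[float, str]], dispatch) -> None:
    """schedule: (seconds_from_start, content_id) pairs."""
    start = time.monotonic()
    for offset, content_id in sorted(schedule):
        delay = offset - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        dispatch(content_id)  # e.g. begin pushing this content's portions

# run_schedule([(0.0, "welcome"), (300.0, "team-logo")], dispatch=print)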

[0076] In one example, the electronic processing device transfers a content portion to each mobile communications device when the content portion is to be presented, so that the mobile communication device presents the content portion substantially upon receipt.

[0077] Additionally and/or alternatively, however, the electronic processing device can determine a trigger associated with each piece of content and then transfer an indication of the trigger to each mobile communication device together with the content portion, so that the mobile communication device can be responsive to the trigger to present the content portion. This allows content portions to be 'preloaded' onto mobile communication devices, thereby obviating the need to transfer a large amount of data to a number of communication devices substantially simultaneously. This is particularly useful if only limited network bandwidth is available, in which case it may be difficult to upload content portions to each of the number of mobile communications devices at the same time, thereby rendering the presentation of the content more problematic. To avoid this, the content portion is preloaded, and each content portion can then be displayed in response to a trigger.
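
One plausible shape for such a transfer, sketched below, bundles each content portion with the identifier of the trigger that should release it, so that the bulk of the data can be moved well before presentation time. The JSON layout and field names are assumptions for illustration, not a format defined here.

# Serialise one preloaded content portion together with its trigger.
import base64
import json

def preload_message(content_id: str, trigger_id: str, portion: bytes) -> str:
    return json.dumps({
        "content_id": content_id,
        "trigger_id": trigger_id,  # shared by every portion of this content
        "portion": base64.b64encode(portion).decode("ascii"),
    })

msg = preload_message("halftime-logo", "trigger-7", b"...tile bytes...")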

[0078] The trigger can be selected for the particular content being presented so that a single trigger can be used to cause the presentation of content on each of the mobile communications devices simultaneously.

[0079] The nature of the trigger may vary depending upon the preferred implementation. In one example, this can simply be a signal transmitted to the mobile communication device from the electronic processing system. Alternatively, however, the trigger may be an external trigger that can be detected by the mobile communication device. For example, this could correspond to a particular chord sequence in a musician's performance, with the microphone of the mobile communications device being used to detect when the particular chord sequence is played. This allows the mobile communication device to then cause the content to be presented at the appropriate time. Thus the mobile communication device can be used to determine the trigger and present the content portion in response to the trigger.
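
Detecting a full chord sequence is involved, but the underlying idea reduces to matching a known audio signature in microphone input. The heavily simplified sketch below merely checks whether a single cue tone dominates one audio frame using an FFT; the frame size, sample rate and threshold are illustrative assumptions.

# Return True if cue_hz (plus its immediate neighbour bins) carries at least
# `threshold` of the frame's spectral magnitude.
import numpy as np

def cue_tone_present(frame: np.ndarray, sample_rate: int,
                     cue_hz: float, threshold: float = 0.5) -> bool:
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    cue_bin = int(np.argmin(np.abs(freqs - cue_hz)))
    band = spectrum[max(cue_bin - 1, 0):cue_bin + 2].sum()
    total = spectrum.sum()
    return total > 0 and band / total >= threshold

# e.g. a 440 Hz test tone in a 2048-sample frame at 44.1 kHz:
t = np.arange(2048) / 44100.0
print(cue_tone_present(np.sin(2 * np.pi * 440.0 * t), 44100, cue_hz=440.0))  # True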

[0080] It will be appreciated that the mobile communication device typically includes a transceiver for communicating with the electronic processing device via a communications network, an output such as a speaker, mechanical vibration device, or display for presenting content portions, and a processor that receives a content portion from the transceiver and causes the content to be presented by the output. The mobile communication device may also include a memory for storing the content portion prior to presentation, as well as an input for determining a trigger which can be used to cause the content to be presented. Thus, the mobile communications device is typically in the form of a mobile phone, smart phone, tablet, or the like.

[0081] In one example, the mobile communications device could take the form of custom hardware. For example, the mobile communications device could be a wearable device, such as a bracelet (similar to a Xyloband™), or the like, including a number of different coloured light sources, such as LEDs (Light Emitting Diodes), which can be selectively activated based on the location of the wearer. This allows different lights to be displayed by different bracelets, so that different regions of the audience may be used to display different colours. It will be appreciated that such functionality can be implemented using relatively basic hardware, and hence relatively cheaply. This allows individuals, such as audience members, to be provided with the hardware, thereby avoiding the need for them to provide their own mobile communication device enabled with suitable applications software.

[0082] In a further example, the mobile communications device can be adapted to cause information to be displayed on an external display. This could include a display in the form of illumination sources, such as LEDs (Light Emitting Diodes), provided in a sticker or other hardware, such as a bracelet. In this instance, the mobile communications device can be used to control the illumination sources to provide different coloured lighting.

[0083] In one example, the process is performed by one or more processing systems operating as part of a distributed architecture, an example of which will now be described with reference to Figure 2.

[0084] In this example, a base station 201 is coupled via a communications network, such as the Internet 202, and/or a number of local area networks (LANs) 204, to a number of mobile communications devices 203. It will be appreciated that the configuration of the networks 202, 204 is for the purpose of example only, and in practice the base station 201 and mobile communications devices 203 can communicate via any appropriate mechanism, such as via wired or wireless connections, including, but not limited to, mobile networks, private networks, such as 802.11 networks, the Internet, LANs, WANs, or the like, as well as via direct or point-to-point connections, such as Bluetooth, or the like.

[0085] In one example, the base station 201 includes a processing system 210 coupled to a database 211. The base station 201 is adapted to be used in providing content to the mobile communications devices 203 and performing other related operations, including defining content portions, identifying mobile device locations and optionally scheduling the content to be presented. The mobile communications devices 203 are similarly adapted to communicate with the base station 201, allowing content portions to be received and then presented, as well as performing other required operations, such as determining a seat allocation, when needed.

[0086] Whilst the base station 201 is shown as a single entity, it will be appreciated that the base station 201 can be distributed over a number of geographically separate locations, for example by using processing systems 210 and/or databases 211 that are provided as part of a cloud based environment. However, the above described arrangement is not essential and other suitable configurations could be used.

[0087] An example of a suitable processing system 210 is shown in Figure 3. In this example, the processing system 210 includes at least one microprocessor 300, a memory 301, an optional input/output device 302, such as a keyboard and/or display or touch screen, and an external interface 303, interconnected via a bus 304 as shown. In this example the external interface 303 can be utilised for connecting the processing system 210 to peripheral devices, such as a router for onward connection to the communications networks 202, 204, databases 211, other storage devices, or the like. Although a single external interface 303 is shown, this is for the purpose of example only, and in practice multiple interfaces using various methods (eg. Ethernet, serial, USB, wireless or the like) may be provided.

[0088] In use, the microprocessor 300 executes instructions in the form of applications software stored in the memory 301 to allow the method of presenting content to be performed, as well as to perform any other required processes, such as communicating with the mobile communications devices 203. The applications software may include one or more software modules, and may be executed in a suitable execution environment, such as an operating system environment, or the like.

[0089] Accordingly, it will be appreciated that the processing system 210 may be formed from any suitable processing system, such as a suitably programmed computer system, PC, web server, network server, or the like. In one particular example, the processing system 210 is a standard processing system such as a 32-bit or 64-bit Intel Architecture based processing system, which executes software applications stored on non-volatile (e.g., hard disk) storage, although this is not essential. However, it will also be understood that the processing system could be any electronic processing device such as a microprocessor, microchip processor, logic gate configuration, firmware optionally associated with implementing logic such as an FPGA (Field Programmable Gate Array), or any other electronic device, system or arrangement.

[0090] As shown in Figure 4, in one example, the mobile communications device 203 includes at least one microprocessor 400, a memory 401, an input/output device 402, such as a keyboard and/or display, and an external interface 403, such as a radio transceiver, interconnected via a bus 404 as shown. In this example the external interface 403 can be utilised for connecting the mobile communications device 203 to the communications networks 202, 204. Although a single external interface 403 is shown, this is for the purpose of example only, and in practice multiple interfaces using various methods (eg. Ethernet, serial, USB, wireless or the like) may be provided.

[0091] In use, the microprocessor 400 executes instructions in the form of applications software stored in the memory 401 to allow communication with the base station 201, for example to allow content to be received therefrom and to allow the content to be presented.

[0092] Accordingly, it will be appreciated that the mobile communications devices 203 may be formed from any suitable device, such as a smart phone, tablet, network enabled media player, laptop, or the like. Thus, in one example, the mobile communications device 203 is a standard smart phone or tablet executing software applications stored on non-volatile storage, although this is not essential. However, it will also be understood that the mobile communications devices 203 can be any electronic processing device such as a microprocessor, microchip processor, logic gate configuration, firmware optionally associated with implementing logic such as an FPGA (Field Programmable Gate Array), or any other electronic device, system or arrangement.

[0093] Examples of the method of presenting content will now be described in further detail. For the purpose of these examples, it is assumed that the processing system 210 determines the content portions and transfers these to the mobile communication devices 203. The processing system 210 is therefore typically a server which communicates with the mobile communications device 203 via a communications network, or the like, depending on the particular network infrastructure available.

[0094] To achieve this the processing system 210 of the base station 201 typically executes applications software for performing the content presentation method, with actions performed by the processing system 210 being performed by the processor 300 in accordance with instructions stored as applications software in the memory 301 and/or input commands received from a user via the I/O device 302, or commands received from the mobile communications device 203.

[0095] It will also be assumed that mobile communications device 203 executes applications software, such as an "App", allowing communication with the processing system 210 and allowing content to be presented. Actions performed by the mobile communications device 203 are typically performed by the processor 400 in accordance with instructions stored as applications software in the memory 401 and/or input commands received from a user via the I/O device 402.

[0096] However, it will be appreciated that the above described configuration assumed for the purpose of the following examples is not essential, and numerous other configurations may be used. It will also be appreciated that the partitioning of functionality between the mobile communications devices 203 and the base station 201 may vary, depending on the particular implementation.

[0097] For the purpose of this example, the base station 201 will be referred to as a venue server, which is used to perform the content presentation method for a venue, such as an auditorium for a concert, whilst it is further assumed that the mobile communications device is either a smartphone or tablet used by audience members attending the performance. However, again, this is not intended to be limiting and is for illustrative purposes.

[0098] An example of the process of registering a mobile communication device 203 for use as part of the process will now be described with reference to Figure 5.

[0099] In this example, at step 500 a user activates an App on the mobile communications device. This may be performed at any time, such as prior to attending a performance, or may occur at the venue, depending on the preferred implementation. In one example, in advance of attending a venue the user can be sent a link, such as a URL (Uniform Resource Locator), allowing them to download and activate the App prior to attending the performance. This can be performed for example as part of an email or other message confirming the user's attendance at the performance.

[0100] At step 510 the App optionally determines a user's allocated seat in the venue. This may be achieved in any one of a number of manners and can involve, for example, having the user enter seating information in response to a prompt displayed by the App. Alternatively, the user may be presented with a seating plan of the venue and allowed to select their seat. In the event that access to the App is provided when the user books a ticket for a particular event, for example as a link in a booking confirmation, the link can include an indication of the allocated seat so that when the user selects the link to open the App, the App is automatically provided with the seat allocation.

[0101] At step 520 the App of the mobile communications device 203 transfers a device identifier and optionally an indication of the allocated seat to the venue server 201. The device identifier may be any identifier that can be used to uniquely identify the mobile communications device 203, allowing content portions to be provided to the correct mobile communications device 203 by the venue server 201. The device identifier can be assigned dynamically by the App and/or venue server 201, and could include an IP (Internet Protocol) address or similar, or alternatively may be part of the mobile communications device hardware, such as a MAC (Media Access Control) address or the like.
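
A registration message of the kind step 520 implies might look like the sketch below, with a dynamically assigned identifier and an optional seat field. JSON and the field names are illustrative assumptions; the transport and format are not prescribed.

# Build the App's registration payload for the venue server.
import json
import uuid

def registration_message(seat: str | None = None) -> str:
    payload = {"device_id": str(uuid.uuid4())}  # dynamically assigned identifier
    if seat is not None:
        payload["seat"] = seat                  # e.g. "A-12-7"
    return json.dumps(payload)

print(registration_message(seat="A-12-7"))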

[0102] At step 530, the venue server 201 records the device identifier of the mobile communications device 203 and optionally determines the location of the mobile communications device. This may vary depending upon the preferred implementation; for example, if the location is simply based on the seat allocation, and a seat allocation was determined by the App at step 510, then the venue server can simply record an association between the mobile communications device identifier and the seat allocation.

[0103] Alternatively, if the location is based on the physical location of the mobile communications device 203 within the venue, for example based on signals transferred between the venue server 201 and the mobile communications device 203, this can be performed as soon as the mobile communications device enters the venue, or at another predetermined time, such as the start of the performance, and may optionally be updated as the mobile communications device moves within the venue. An example of the manner in which this can be achieved will be described in more detail below with reference to Figures 8A and 8B.

[0104] In any event, it will be appreciated that the registration process is substantially automated, and in one example all that is required from the user of the mobile communications device is to activate or download the relevant App, and potentially allow certain permissions, such as to grant network access, allowing the App to then automatically register with the venue server 201 and furthermore present content.

[0105] An example of the process of presenting content will now be described with reference to Figure 6A.

[0106] In this example, at step 600 the venue server 201 determines content to be presented, for example based on input commands from an operator, or alternatively based on a predetermined schedule or similar. At step 610 the venue server determines content portions for different locations within the venue. This may correspond to the locations of the mobile communications devices or may simply involve allocating different content portions to different seats within the venue. This can either be performed dynamically during this process or during a content configuration process, as will be described in more detail below with reference to Figure 9A.

[0107] At step 620, the venue server 201 determines the location of each mobile communications device 203. This may be achieved based on the seat allocation of each mobile communications device 203, assuming this has been provided or otherwise determined, or may be achieved by detecting the location of mobile communications devices, as will be described in more detail below with reference to Figures 8A and 8B.

[0108] Having determined the location of each mobile communications device the venue server 201 determines a content portion to be provided to each mobile communications device at step 630. Thus, the venue server 201 will use the different content portions determined at step 610 and the current mobile communications device location, in order to allocate a particular content portion to each mobile communications device 203.

[0109] At step 640 the venue server transmits the content portion to each mobile communications device, which then presents the content at step 650.
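
Taken together, steps 620 to 640 amount to the small server-side loop sketched below: look up each registered device's location, select the matching portion, and push it. The registry structure and the send callback are assumptions for illustration.

# Push each device the content portion allocated to its location.
def push_content(registry: dict[str, tuple[int, int]],
                 portions: dict[tuple[int, int], bytes],
                 send) -> None:
    """registry: device_id -> (row, col); portions: (row, col) -> bytes."""
    for device_id, location in registry.items():  # step 620: device locations
        portion = portions.get(location)          # step 630: allocate a portion
        if portion is not None:
            send(device_id, portion)              # step 640: transmit

# push_content({"dev-1": (0, 0)}, {(0, 0): b"tile"},
#               send=lambda dev, data: print(dev, len(data)))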

[0110] Thus, from the perspective of the mobile communications device 203, the mobile communications device 203 will receive content pushed to it from the venue server 201, which is typically in communication with the App installed and running on the mobile communications device 203. The App causes the received content portion to be displayed, for example, by displaying an image, a colour, a glow or single shade of illumination on the screen, by causing sound to output from a speaker, or by vibrating the mobile communications device 203.

[0111] An example of the manner in which content portions from an image may be displayed will now be described with reference to Figures 6B and 6C.

[0112] In particular, Figure 6B shows an image divided into nine tiles, as shown by the dotted lines, whilst Figure 6C shows how each tile can be displayed by a respective communications device 203. In this example, it will be appreciated that by arranging the nine mobile communications devices 203 in a three by three grid, the image can be apportioned and transmitted as respective tiles to each mobile communications device, which can then operate to display the image collectively. Whilst the above referenced example assumes the mobile communications devices 203 are positioned substantially in abutment, this is for illustration only, and more typically the mobile communications devices 203 will be spaced apart and held by individual members of a crowd or gathering. Despite this, the image can still be perceived across all devices.

[0113] A second example of a method of presenting content will now be described with reference to Figure 7. In this example triggers are used to cause content to be presented allowing content portions to be preloaded onto mobile communications devices 203.

[0114] In this example, as in the previous example, at step 700 the venue server 201 initially determines the next content instance to be delivered, and then determines a content portion for different locations within the venue, such as for different seats, at step 710. These steps are substantially as described above and will not be described in further detail.

[0115] In this example, however, the venue server 201 defines a trigger for the content at step 720. The trigger is associated with that particular piece of content or content instance, so that the same trigger can be used for all content portions associated with a particular content instance.

[0116] At step 730, the venue server 201 determines whether all content instances are completed and if not, returns to step 700, otherwise the venue server transmits the content portions and corresponding indications of the triggers to each mobile communications device 203 at step 740.

[0117] At step 750 the mobile communications device detects a trigger and uses the trigger to present the corresponding content portion at step 760. Thus, in this example, the App will be adapted to monitor for potential triggers and then compare these to a list of triggers stored in memory, which are associated with respective content portions. The respective content portion is then presented by the mobile communications device 203.

[0118] As previously described, the triggers can take any one of a number of forms and can include external triggers, such as detected sounds, or alternatively can include signals transmitted by the venue server, or user inputs provided by a user. Accordingly, it will be appreciated that the above described methods allow content portions to be provided to respective mobile communications devices and then displayed as required, either immediately or upon a suitable trigger occurring.
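A minimal sketch of the corresponding device-side (App) logic, under the assumption that preloaded content portions are stored against trigger identifiers, might be:

```python
# Illustrative sketch of steps 740-760, from the device's perspective.
preloaded = {}   # trigger_id -> content portion, populated at step 740

def on_content_received(trigger_id, portion):
    """Store a preloaded content portion against its trigger (step 740)."""
    preloaded[trigger_id] = portion

def on_trigger_detected(trigger_id):
    """Present the corresponding portion when a trigger occurs (steps 750/760)."""
    portion = preloaded.get(trigger_id)
    if portion is not None:
        display(portion)

def display(portion):
    print(f"presenting: {portion}")  # stand-in for screen/sound/vibration output
```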

[0119] Accordingly, the above described process can be performed whilst requiring only minimal input from the user. In particular, all they are required to do is activate the App and optionally provide a seat allocation, for example by entering a number or scanning coded data in the form of a barcode, QR code or similar. In this instance, the App can cause the mobile communications device 203 to automatically connect to the venue server 201, typically via Wi-Fi or another local area network, before receiving and presenting content portions as required. In this regard, users may be prompted during a performance to hold their mobile phones up facing a predetermined direction, such as towards a camera or stage, with content then automatically displaying without user input. This allows content to be displayed coherently across a large number of mobile communications devices without requiring actions to be performed by users.

[0120] It will be appreciated that prompts may be provided to users in any one of a number of ways. For example, prompts could be provided by performers indicating to the audience that they should hold their phones aloft, or alternatively could be achieved by pushing content to the mobile communications devices. Thus, pushed content portions could take the form of audible or visible instructions, or haptic instructions (such as vibration), letting the audience know when to hold their phones up.

[0121] An example process for automatically determining the location of mobile communications devices will now be described with reference to Figures 8A and 8B.

[0122] In this example, the venue server 201 is connected to a number of antennae 801, which allow for wireless communication with the mobile communications devices 203. At step 800 the venue server 201 selects a next mobile communications device 203 and then polls the mobile communications device 203 via the antennae 801, using the mobile identifier determined during the registration process described above with respect to Figure 5.

[0123] At step 820 the mobile communications device 203 responds to the venue server 201, allowing the venue server 201 to determine the location of the mobile communications device 203 based on the received signals. Thus the venue server 201 can cause the mobile communications device 203 to transmit a signal which is received by each of a number of antennae 801. The relative strength of the signals or the time of flight of the signals can then be used by the venue server 201 to triangulate the position of the mobile communications device 203.
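By way of illustration, a standard least-squares trilateration calculation of this kind, assuming the distance to each antenna has already been derived from time of flight (d = c x t) or from signal strength, might be sketched as:

```python
# Illustrative sketch; not the application's specific method.
import numpy as np

def trilaterate(antennae, distances):
    """antennae: list of (x, y) antenna positions; distances: list of
    estimated distances d_i from the device to each antenna.
    Returns a least-squares estimate of the device position (x, y)."""
    (x1, y1), d1 = antennae[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(antennae[1:], distances[1:]):
        # Subtracting pairs of circle equations linearises the problem.
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(pos)

# e.g. four antennae at the corners of a 100 m x 50 m auditorium:
#   trilaterate([(0, 0), (100, 0), (0, 50), (100, 50)], [d1, d2, d3, d4])
```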

[0124] It will be appreciated that this process may be performed once, for example to determine a seat allocation once the user has sat down. Alternatively, this process can be performed periodically and/or substantially continuously, so that movement of mobile communications devices 203 around a venue can be detected. In this instance, if an individual moves seat during a performance, or if individuals are not seated but are walking around within an area, this allows the content portion displayed by the mobile communications device to reflect the current location.

[0125] Whilst the above described process has been described as performed by the venue server, it will be appreciated that alternatively this could be performed by the mobile communications device 203, for example by having the mobile communications devices triangulate their own position based on signals received from the venue server 201 via the antennae.

[0126] Additionally and/or alternatively, the mobile communications device 203 can utilise inbuilt sensors, such as accelerometers, gyroscopes, or the like, in order to update a location of the mobile communications device 203. For example, the mobile communications device 203 can automatically detect a reference position in the venue utilising a technique similar to that described above with respect to Figure 8. Sensors within the mobile communications device can then be used to monitor movement of the mobile communications device from the reference position, with this information being transmitted to the venue server as required, in order to allow the content portion displayed by the mobile communications device to be updated.
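A heavily simplified dead-reckoning sketch of this sensor-based updating, assuming two dimensions and accelerometer samples already expressed in venue coordinates, might be as follows; real inertial tracking would also require gyroscope data and drift correction.

```python
# Illustrative sketch: integrate one accelerometer sample per time step.
def update_position(position, velocity, accel, dt):
    """position, velocity, accel: (x, y) tuples; dt: sample interval in s.
    Returns the new (position, velocity) after one integration step."""
    vx = velocity[0] + accel[0] * dt
    vy = velocity[1] + accel[1] * dt
    px = position[0] + vx * dt
    py = position[1] + vy * dt
    return (px, py), (vx, vy)
```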

[0127] In a further variation, mobile communications devices 203 can be provided with content portions that correspond to a tile having an area larger than the physical display area of the mobile communications device 203. For example, a mobile communications device may only have a screen of a few square centimetres, whereas an image representing a square metre can be provided to the mobile communications device 203. In this instance, the App uses sensors within the mobile communications device 203 to detect movement of the mobile communications device 203 relative to the reference position, which can correspond to the centre of the image.

[0128] As the mobile communications device 203 is moved, the particular part of the content portion that is displayed, i.e. a particular square centimetre section of the square metre image, can be adjusted by the mobile communications device 203 without requiring additional content portions to be received from the venue server 201. This allows an individual to move the mobile communications device over a small area relative to the reference position without requiring additional content portions to be transferred. Thus, an individual can wave their mobile phone in the air with the presented image constantly being adjusted to reflect the position of the phone, so that the content is still presented consistently across the number of mobile communications devices.
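A possible sketch of selecting the on-screen window within such an oversized content portion, with all parameter names being assumptions, is:

```python
# Illustrative sketch: the visible window follows the device's offset from
# the reference position, which corresponds to the centre of the tile.
from PIL import Image

def visible_window(tile, offset_m, metres_per_pixel, screen_px):
    """tile: PIL image of the oversized portion; offset_m: (dx, dy) offset of
    the device from the reference position in metres; screen_px: (w, h)."""
    cx = tile.width // 2 + int(offset_m[0] / metres_per_pixel)
    cy = tile.height // 2 + int(offset_m[1] / metres_per_pixel)
    w, h = screen_px
    # Clamping at the tile edges is omitted for brevity.
    return tile.crop((cx - w // 2, cy - h // 2, cx + w // 2, cy + h // 2))
```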

[0129] An example process for allowing the control of content by an operator will now be described with reference to Figures 9A and 9B.

[0130] In this example, at step 900 the venue server 201 displays an interface to an operator. This could be on a display of the venue server, or via a remote computer system, for example by having the venue server host a webpage for display by a browser of the remote computer system.

[0131] An example interface is shown in Figure 9B, which includes a venue map window 951, showing a venue representation, for example in the form of a seating plan divided into regions P1, P2, P3, P4, P5. The interface further includes a control window 952, including inputs for allowing various venue settings to be adjusted, a source media window 953 for displaying selected content, a library window 954 for allowing content to be selected, and a storyboard/playlist window 955 for scheduling content for presentation.

[0132] In this example, at step 910 the operator can select content for presentation, for example by selecting content from the library using the library window 954 and viewing this in the source media window 953 to ensure the content is correct. Alternatively, content may be predefined and stored in a schedule, such as a playlist or the like, in which case the next piece of content to be presented may be displayed to the operator.

[0133] Once the correct content has been selected, the operator allocates the content to the venue representation at step 920. In one example, the content is allocated to a particular region in the venue using any suitable technique, such as by dragging the content onto a respective region P1, P2, P3, P4, P5 on the seating plan map. This typically causes the content to be imported into the storyboard window, allowing the operator to review the content that has been scheduled for presentation. The venue server 201 then determines content portions corresponding to each seat, or standing position, in a particular area using the venue representation at step 930.
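By way of illustration, the expansion of content allocated to a region into per-seat portions at step 930 might be sketched as follows; the names and the seat representation are assumptions.

```python
# Illustrative sketch of step 930: one content portion per seat in a region.
def portions_for_region(region_seats, content_for_seat):
    """region_seats: dict mapping seat_id -> (row, col) within the region;
    content_for_seat: function mapping (row, col) to a content portion."""
    return {
        seat_id: content_for_seat(row, col)
        for seat_id, (row, col) in region_seats.items()
    }

# e.g. a solid colour for a whole region:
#   portions_for_region(seats_in_P1, lambda r, c: "blue")
```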

[0134] Thus, the venue server can implement a control panel, which is a 'behind the scenes' software program to control the delivery of activation commands to individuals' devices based on their location, in order to achieve a desired overall presentation of content consolidated across the mobile communication devices.

[0135] Accordingly, the manner in which content portions are assigned to create an overall presentation of content (referred to generally as the "Overall Display") can be achieved using basic software tools implemented as part of the control panel program, and this can use basic control tools similar to those found in document creation software applications.

[0136] The control panel typically defines perimeters of the main crowd area/s, located and pre-programmed into the software as part of a venue map, seating plan or the like. In the control panel a designer can play back a virtual example of what the Overall Display would appear like, to facilitate correct presentation of content. Venues may have multiple sections, as shown in Figure 9B, such as a centre floor (P4), centre back tier (P2), left and right tiers (P1 & P3) and upper mezzanine level (P5) seating. In the control panel the designer can select to display a seating section or the overall venue map in the venue map window 951, allowing blocks of colour, detailed images, and moving images to be prepared in other available programs, such as Adobe Photoshop, Final Cut Pro, iMovie, iPhoto, etc., imported into the control panel and further edited, before being assigned to particular regions or seats within the venue, as described.

[0137] Accordingly, the above described method and apparatus can be used to allow a group of people, such as an audience, that may be seated or standing together, to use their own mobile communications devices to participate in the presentation of content. In one example, a variety of functions of each person's device can be used in presenting content, such as the screen, inbuilt flashlight, vibration, and audio recording and playback functions, allowing these to be activated in a co-ordinated manner so that content can be presented based on the locations of the individuals' mobile communications devices.

[0138] In one example, this is performed using a central server or the like to control the presentation of multiple mobile communications devices centrally, for example via a wireless network. The central server communicates with a smart phone application or the like, which offers an interactive media platform for entertainment events. It will be appreciated from this that the process can be performed using any handheld device with a screen - most commonly smart phones - but also including larger portable personal devices such as tablets, iPads and even personal computers.

[0139] The system uses individuals' devices collectively to achieve an enhanced experience for the audience, engaging them and placing entertainment value on their individual participation in a fresh way. Based on location, each device's screen light becomes like a pixel and the audience collectively becomes like a screen. Entertainers can choose from a range of different engaging options, which adopt some or all available smart phone functionalities, such as screen light, microphone, sound and vibration. The size of entertainment venue in which the technique can be used is unlimited, including home and private venue setups, and use is not restricted geographically, whether around the world or even in outer space.

[0140] The above described process can use a variety of location sensing technologies, including indoor and outdoor sensing mechanisms, as well as manually entered location information, to pin-point the location of mobile communication devices to sufficient accuracy to allow the content to be correctly presented.

[0141] In one example, a group of smart devices 203, usually smart phones, can be present in the one location at an event, such as a music event. The group of devices comprises individual devices 203 that each have their own display or other content presentation mechanism. Each mobile communication device has the ability to determine its location relative to the other devices in the same proximity, by the use of triangulation using a wireless network or other similar technique.

[0142] In one example scenario a number of antennae, such as four, can be used for triangulation. These are used by each individual device 203 to determine its relative location within the group of devices, by monitoring the strength or time-of-flight of each Wi-Fi antenna signal. For example, if the signal from one antenna is stronger, or its time-of-flight shorter, than that from another antenna, the relative strengths and times can be used to determine at least one axis of position within the event auditorium. As more antennae are monitored, a progressively more accurate location within the auditorium can be determined.
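One common way of converting received signal strength into a distance estimate, which could then feed a trilateration calculation such as the sketch above, is the log-distance path-loss model; this is presented as an assumption rather than a method specified in the application.

```python
# Illustrative sketch: log-distance path-loss model.
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.5):
    """rssi_dbm: measured signal strength; tx_power_dbm: RSSI expected at 1 m.
    Both parameters require per-venue calibration. Returns distance in metres."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))
```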

[0143] In another example, the location could be determined using imaging and image analysis techniques. For example, individuals may be asked to capture images of one or more different objects using an image sensor of the mobile communications device, such as a built-in camera. The position of the camera, and hence the mobile communications device, can then be derived based on information from the image/s, including, for example, the size and relative angle of the object/s in the image.
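By way of illustration, a simple pinhole-camera ranging calculation of this kind, assuming an object of known physical size, might be:

```python
# Illustrative sketch: an object of known size appears smaller with distance.
def distance_to_object(real_height_m, height_px, focal_length_px):
    """Distance (m) from the camera to an object of known physical height,
    given its apparent height in pixels and the camera's focal length in
    pixels (obtainable from camera calibration)."""
    return focal_length_px * real_height_m / height_px

# e.g. a 5 m stage banner spanning 200 px with a 1500 px focal length:
#   distance_to_object(5.0, 200, 1500)  # -> 37.5 m
```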

[0144] The ability of the device 203 to determine its location within an event auditorium or facility allows it to accomplish a number of collaborative capabilities. For example, in the example of Figures 6B and 6C, an image can be broken up into components, with a small component of the image displayed on each of the devices 203, enabling ten, a hundred or even a thousand people to participate in the display of the image on a large scale. This technique can be used to synchronize with things happening at the event, such as the frequency of the music, a beat, a marriage proposal, branding, or the lyrics of a song being sung.

[0145] As shown in Figure 8B, one of the devices 203 in a group can use its own network antenna and four other antennae to work out its position relative to all the other devices 203 in its proximity. It can then display its portion of the overall image in such a way that the combined effect of all of the devices being used in the same way will show an image coordinated by the controller of the event.

[0146] As a result, the audience could participate at synchronized times during the event in displaying graphical elements that can be seen from other places around the auditorium, or captured on video and shown back to the audience on the event's video screens, if present and part of the performance.

[0147] It will be appreciated that the technique can be used with any number of devices 203, and could include one or more devices being used in a coordinated fashion to present graphical, visual or audible elements.

[0148] Whilst the example embodiment discloses the display of an individual painting, for example as shown in Figures 6B and 6C, an alternative embodiment could use any graphical or other output that the device is capable of, including but not limited to real time video and coordinated audio, and anything else that the device is capable of displaying or outputting in a coordinated fashion.

[0149] Whilst the above example uses triangulation based on WiFi antennae and network strength detection by the device, alternative examples could use any form of location detection that can be achieved with standard smart device capabilities. These would include, but not be limited to, any type or application of a wireless network or networks, the use of a mesh network, or the use of visual or audio based location detection systems utilising common smart device capabilities such as the camera or the microphone.

[0150] Furthermore, in early development stages where location technology accuracy is limited, simple location detection may include people inputting their seat numbers or seating section into the software application on their devices. Based on the inputted information, their devices will be sent commands accordingly (via the wireless network) to achieve a simple Overall Display.

[0151] In later stages of development, when more precise location technologies become available, people's devices will be located automatically and sent commands (via the wireless network) relevant to their current, floating position, to achieve a more detailed and dynamic Overall Display.

[0152] When the technique is applied to image content, in its simplest form, depending on the location of each person's mobile communication device, their device will be sent a command to light up the screen (or to use any other available visual projection function that the device is capable of, i.e. a flashlight), producing colour or a light to shine at a certain time. Each device acts like a pixel in the overall display. Blocks of light or colour can then be assigned to different regions in a venue to produce a multi-region image, for example having a checkerboard appearance, colour shades and varying brightness of light or the like. For example, in the venue shown in the venue map window 951 above, the different sections could have different colours, such as P1 = blue, P2 = red, P3 = green, P4 = yellow, P5 = violet.
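In this simplest case the Overall Display amounts to a mapping from regions to colours; a minimal sketch, with a hypothetical command structure, is:

```python
# Illustrative sketch: one colour command per region of the venue.
REGION_COLOURS = {"P1": "blue", "P2": "red", "P3": "green",
                  "P4": "yellow", "P5": "violet"}

def command_for_device(device_region):
    """Return the command pushed to every device located in device_region."""
    return {"action": "light_screen", "colour": REGION_COLOURS[device_region]}
```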

[0153] However, more complex images could be produced, such as a logo of a band or the like. This can also include detailed images, so that, for example, the words "GOAL" or "Marry Me?" could be seen in the Overall Display.

[0154] In a further example, the screens of individual devices can also flash on and off in any desired colour/frequency/intensity pattern, e.g. random flashing of colours. Used in a more coordinated way, flashing of shading and brightness of colour acts something like frames of video, and can create what looks like a drop in water and its ripple effect.
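By way of illustration, such a ripple effect might be produced by making each device's brightness a function of its distance from a chosen centre and the elapsed time; the parameters below are illustrative only.

```python
# Illustrative sketch: an expanding ripple across the audience.
import math

def ripple_brightness(device_pos, centre, t, speed=5.0, wavelength=10.0):
    """Brightness in [0, 1] for a device at device_pos (metres) at time t (s);
    speed is the ripple's radial speed (m/s), wavelength its spacing (m)."""
    dx, dy = device_pos[0] - centre[0], device_pos[1] - centre[1]
    r = math.hypot(dx, dy)
    phase = 2 * math.pi * (r - speed * t) / wavelength
    return 0.5 * (1 + math.cos(phase))
```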

[0155] A group of people or a single person can be nominated in the audience using the visuals of the Overall Display, e.g. a single light or a block of colour may roam the audience and randomly land on a user or a group of people in tiered seating. Alternatively, the entertainer may simply select something like a code, ticket or seat number, with the light roaming and finding the corresponding audience member.

[0156] Video sequences, such as live footage, can also be filmed on a camera, with the visual information instantly processed and the relevant pixels distributed to individuals' mobile communication devices (relevant to their location and the scale of the display), so that the total effect of the Overall Display is a representation of the live footage in real time. This is not limited to one camera, or to being displayed on one section of the Overall Display.

[0157] The presentation of content can also include audible content. In one example, this can be used to present a single audio source, which can be presented over the entire audience or a selected area of the audience. This can include, for example, theatre actors' voices amplified over the entire audience or just the far back section.
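Referring back to the live footage example of paragraph [0156], a minimal sketch of downsampling each video frame so that one pixel corresponds to one device location might be as follows, using Pillow; the grid representation is an assumption.

```python
# Illustrative sketch: one frame pixel per device location.
from PIL import Image

def frame_to_device_colours(frame, grid_cols, grid_rows):
    """Downsample a frame so each pixel maps to one (row, col) device
    position; returns a dict of RGB colours to push to the devices."""
    small = frame.resize((grid_cols, grid_rows))
    px = small.load()
    return {(r, c): px[c, r]
            for r in range(grid_rows) for c in range(grid_cols)}
```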

[0158] Similarly, multiple audio sources can be presented over the entire audience or a selected area of the audience per source. For example, the lead singer, keys and bass could each be played back live from sections P4, P1 and P3 of Figure 9B respectively.

[0159] Single or multiple devices can be used as sound recording devices, using a microphone as an audio source, which can then be recorded and/or played back as desired.

[0160] In one example, a single audience member can be chosen and the microphone on their phone becomes activated. They may be asked to sing along with the band, ask questions, or otherwise interact with presenters, with their voice being heard from the main speakers or another nominated external speaker.

[0161] Single or multiple mobile communication device microphones can be used as audio sources to be presented over the entire audience, or over a selected area of the audience per source. So, for example, two or more selected audience members may be asked to have a sing-off, with each voice amplified from the section in which that member is sitting.

[0162] The microphone functionality can also record the voice or sounds of nominated audience member/s live, with the audio recorded from the device/s being used by the entertainer, transformed by the entertainer if desired, and incorporated into the live audio production of the show. For example, an artist or comedian may want to write a new song live on the night, getting words or ideas from the audience and using them in the audio creation and live performance.

[0163] Adhering to copyright laws, the artist may want to record the new song created using audience interaction and sell it to raise money for a charity. Audience members may purchase the track as a memento of the experience of the night, or be able to download it for free, in which case the App could provide an "Allow"/"Permission" prompt before a selected audience member participates.

[0164] It will be appreciated that the vibration functions on mobile phones can be used in a manner similar to that described above.

[0165] Different types of content can also be presented individually and/or in conjunction, so that for example different sounds could be associated with different colours, depending on the desired overall content presentation.

[0166] It will be appreciated that the ability to display information dynamically based on the relative locations of devices can be used in a wide range of ways.

[0167] For example, this allows performers to interact directly with the displays, using an appropriate input technique. Thus, a position sensor could be used to determine where a performer is pointing, allowing the performer to influence the image at that location. So, a performer such as an acrobat hanging upside down and spinning can point to the audience, with the mobile communication devices of the audience members being pointed at lighting up accordingly. This could be achieved by using a wristband worn by the performer having a position and orientation sensor, allowing the venue server to calculate where the performer is pointing in the venue, in turn allowing the mobile devices to be controlled accordingly. In another example, a performer can blow a kiss to section P3 of the venue seating, causing the phones in that section to vibrate. This could be part of an act performed in time with a song, meaning that the timing is pre-programmed into the App but appears to respond to the performer's command.
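By way of illustration, the pointing calculation might be sketched as projecting a point from the performer's position along the sensed orientation and selecting nearby devices; the geometry is simplified to two dimensions and all names are assumptions.

```python
# Illustrative sketch of the wristband pointing example.
import math

def pointed_location(performer_pos, yaw_rad, distance):
    """Project a point 'distance' metres from performer_pos along yaw_rad."""
    x = performer_pos[0] + distance * math.cos(yaw_rad)
    y = performer_pos[1] + distance * math.sin(yaw_rad)
    return (x, y)

def devices_near(point, device_positions, radius=3.0):
    """device_positions: dict device_id -> (x, y); returns ids within radius."""
    return [d for d, (x, y) in device_positions.items()
            if math.hypot(x - point[0], y - point[1]) <= radius]
```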

[0168] The system can also be used as an advertising platform. In this regard, the wide-ranging functionality entices a large number of people to download the App, which therefore occupies a large quantity of mobile communication device real estate, making it an attractive advertising platform. Advertisers or sponsors of events can choose to have their brand presented as a larger display over the audience, as a banner strip or colour indent on individuals' screens during an Overall Display, or as a small banner notification, as is standard in mobile app advertising.

[0169] The system can also be used to give away prizes, increase brand awareness and powerfully promote products. The advertising can also enhance the experience of each user if there is the opportunity to get products and vouchers for free.

[0170] Throughout this specification and claims which follow, unless the context requires otherwise, the word "comprise", and variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated integer or group of integers or steps but not the exclusion of any other integer or group of integers.

[0171] Persons skilled in the art will appreciate that numerous variations and modifications will become apparent. All such variations and modifications which become apparent to persons skilled in the art should be considered to fall within the spirit and scope of the invention as broadly described hereinbefore.