

Title:
PAIRED DEVICES
Document Type and Number:
WIPO Patent Application WO/2017/144408
Kind Code:
A1
Abstract:
A first device for rendering an environmental effect in at least a region of an environment, the device configured to: determine a supplementary environmental effect, one of the environmental effect and the supplementary effect being a lighting effect; determine a second device, at least one of the first and second devices being a portable device; detect a current proximity of the second device to the region of the environment; evaluate whether the current proximity of the second device is within a first threshold proximity; control a transducer of the first device to render the environmental effect; and on condition that the current proximity was determined to be within the first threshold proximity, control the second device, via a wireless interface, to render the supplementary effect.

Inventors:
ENGELEN DIRK VALENTINUS RENÉ (NL)
VAN DE SLUIS BARTEL MARINUS (NL)
MEERBEEK BERENT WILLEM (NL)
Application Number:
PCT/EP2017/053782
Publication Date:
August 31, 2017
Filing Date:
February 20, 2017
Assignee:
PHILIPS LIGHTING HOLDING BV (NL)
International Classes:
H05B37/02; A63J17/00
Domestic Patent References:
WO2009083865A12009-07-09
WO2015025235A12015-02-26
Foreign References:
US20150256954A12015-09-10
US7687744B22010-03-30
EP2498211A12012-09-12
US20090271003A12009-10-29
US8279709B22012-10-02
Attorney, Agent or Firm:
VERWEIJ, Petronella, Danielle et al. (NL)
CLAIMS:

1. A first device for rendering an environmental effect in at least a region of an environment, the first device comprising:

a wireless interface;

a transducer for rendering said environmental effect; and

a controller configured to:

determine a supplementary environmental effect, corresponding to the environmental effect, wherein said supplementary environmental effect is a lighting effect and said environmental effect is an audio effect, or said supplementary environmental effect is an audio effect and said environmental effect is a lighting effect;

determine at least one second device, wherein at least one of the first and second devices is a portable device such that a proximity of the first and second devices varies over time;

detect at least a current proximity of said at least one second device to said region of the environment;

evaluate whether said current proximity of said at least one second device to said region of the environment is within a first threshold proximity;

control the transducer to render said environmental effect;

determine a capability of said at least one second device; and

on condition that said current proximity of said at least one second device to said region of the environment is determined to be within the first threshold proximity based on said evaluation, and on further condition that said at least one second device is capable of rendering said supplementary environmental effect, control the at least one second device, via the wireless interface, to render said supplementary environmental effect, causing the first and the second device to act together to contribute to a single sensory effect within the environment.

2. The first device of claim 1, wherein the controller is further configured, following said controlling the at least one second device, via the wireless interface, to render said supplementary environmental effect, to:

determine at least one third device, wherein at least one of the first and third devices is a portable device such that a proximity of the first and third devices varies over time;

detect at least a current proximity of said at least one third device to said region of the environment;

evaluate whether said current proximity of said at least one third device to said region of the environment is within a second threshold proximity;

control the transducer to render said environmental effect; and

on condition that said current proximity of said at least one third device to said region of the environment is determined to be within the second threshold proximity based on said evaluation, control the at least one third device, via the wireless interface, to render said supplementary environmental effect.

3. The first device of any preceding claim, wherein the environmental effect and the supplementary environmental effect are temporally synchronized.

4. The first device of any preceding claim, wherein said determining at least one second device is performed at least periodically.

5. The first device of any preceding claim, wherein said determining at least one second device is performed at least responsive to user input received via the wireless interface.

6. The first device of any preceding claim, wherein said determining at least one second device is performed at least responsive to the controller detecting that at least one of the first device and the second device has moved.

7. The first device of claim 6, wherein said first device is a first portable device and said second device is a fixed device, and wherein said determining at least one second device is performed responsive to the controller detecting that the first portable device has moved.

8. The first device of claim 7, wherein the first portable device further comprises an inertial sensor and said detecting that the first portable device has moved is based at least on input from the inertial sensor.

9. The first device of claim 7 or 8, wherein said detecting that the first portable device has moved is based at least on input from a location network.

10. The first device of any preceding claim, wherein said current proximity of said at least one second device to said region of the environment is an estimated current proximity of said at least one second device to said region of the environment, and wherein the detection of the estimated current proximity is performed at least by detecting a current proximity of said at least one second device to the first device.

11. The first device of any of claims 1 to 9, wherein said supplementary environmental effect is a lighting effect and said environmental effect is an olfactory effect, or said supplementary environmental effect is an olfactory effect and said environmental effect is a lighting effect.

12. A method of rendering an environmental effect in at least a region of an environment by at least a first device, the method comprising steps of:

determining a supplementary environmental effect, corresponding to the environmental effect, wherein said supplementary environmental effect is a lighting effect and said environmental effect is an audio effect, or said supplementary environmental effect is an audio effect and said environmental effect is a lighting effect;

determining at least one second device, wherein at least one of the first and second devices is a portable device such that a proximity of the first and second devices varies over time;

detecting at least a current proximity of said at least one second device to said region of the environment;

evaluating whether said current proximity of said at least one second device to said region of the environment is within a first threshold proximity;

rendering said environmental effect;

determining a capability of said at least one second device; and

on condition that said current proximity of said at least one second device to said region of the environment is determined to be within the first threshold proximity based on said evaluation, and on further condition that said at least one second device is capable of rendering said supplementary environmental effect, controlling the at least one second device to render said supplementary environmental effect, causing the first and the second device to act together to contribute to a single sensory effect within the environment.

13. A computer program product for rendering an environmental effect in at least a region of an environment by at least a first device, the computer program product comprising code embodied on a computer-readable storage medium, wherein the code is configured, when run on one or more processing units, to perform operations of:

determining a supplementary environmental effect, corresponding to the environmental effect, wherein said supplementary environmental effect is a lighting effect and said environmental effect is an audio effect, or said supplementary environmental effect is an audio effect and said environmental effect is a lighting effect;

determining at least one second device, wherein at least one of the first and second devices is a portable device such that a proximity of the first and second devices varies over time;

detecting at least a current proximity of said at least one second device to said region of the environment;

evaluating whether said current proximity of said at least one second device to said region of the environment is within a first threshold proximity;

rendering said environmental effect;

determining a capability of said at least one second device; and

on condition that said current proximity of said at least one second device to said region of the environment is determined to be within the first threshold proximity based on said evaluation, and on further condition that said at least one second device is capable of rendering said supplementary environmental effect, controlling the at least one second device to render said supplementary environmental effect, causing the first and the second device to act together to contribute to a single sensory effect within the environment.

Description:
Paired Devices

TECHNICAL FIELD

The present disclosure relates to controlling portable devices such as lighting devices in the presence of one or more other devices such as audio devices.

BACKGROUND

Electronic devices are becoming ever more connected. A "connected" device refers to a device - such as a user terminal, or home or office appliance or the like - that is connected to one or more other such devices via a wireless or wired connection in order to allow more possibilities for control of the device. For instance, the device in question is often connected to the one or more other devices as part of a wired or wireless network, such as a Wi-Fi, ZigBee or Bluetooth network. The connection may for example allow control of the device from one of the one or more other devices, e.g. from an app (application) running on a user terminal such as a smart phone, tablet or laptop; and/or may allow for sharing of sensor information or other data between the devices in order to provide more intelligent and/or distributed automated control.

US7687744B2 discloses a single apparatus that provides light, sound and fragrance in a coordinated manner.

WO2015/025235 A1 discloses a lighting system which can be controlled using a wireless device only when said wireless device is within a predetermined spatial region.

In recent years, the number of connected devices has increased dramatically. Lighting and audio systems are part of this movement towards a connected infrastructure.

Conventional connected lighting systems consist of fixed light sources, which can be controlled through wall-mounted switches, dimmers or more advanced control panels that have pre-programmed settings and effects, or even from an app running on a user terminal such as a smart phone, tablet or laptop. For example, this may allow a user to create an ambiance using a wide range of colored lighting, dimming options and/or dynamic effects. In the home environment, at least one existing system also offers consumers the possibility to implement such a connected lighting solution through retrofit bulbs that can fit into traditional light fittings. Similarly, conventional connected audio systems consist of fixed audio devices (i.e. speakers). These may be connected to one or more amps and control devices which together provide input signals to the speakers and thus allow the audio system to render sound in an environment.

A portable light source is a lighting device that can provide its illumination function without a wired power supply, typically being battery powered by means of a battery on board the lighting device, or potentially instead being powered by another type of on-board power supply such as a manual dynamo, or even being powered using a wireless power transfer (WPT) technique based on radiative electromagnetic induction. A portable light source can thus be taken by the user from one location to the next. Audio devices may also be portable.

SUMMARY

Lighting and audio devices within an environment act to render lighting and audio effects, respectively, within the environment. Hence, it would be desirable that these devices work together in order to render a coherent effect. For example, if a plurality of luminaires are rendering a "waterfall" effect, an additional luminaire introduced into the environment could alter its light output to contribute to this effect (e.g. by turning blue). Similarly, a speaker present within the environment would desirably output a corresponding "waterfall" sound effect to enhance the visual waterfall effect.

In a connected system, the devices can be logically recognized on a network, but their relative locations might be unknown. For example, on a network level, it is possible that a lighting device and an audio device located in different rooms can be paired. For the user, this would look strange when they are playing connected content (e.g. a fireplace and its related soundscape).

The present disclosure recognizes that the interactions between the lighting and/or audio devices when contributing to rendering a coherent environmental effect depend on the relative positions of the devices. In modern systems both lighting and audio devices can be portable and can thus easily be moved relative to one another. This means that it would be desirable to provide a system which can dynamically update its behavior as the devices are moved (e.g. by a user) relative to each other in order to continue to render a coherent effect within the environment.

To perceptually connect a light effect with an audio effect, it is desirable that audio and light are rendered at approximately the same location (spatial connection), and preferably also synchronized in some way (temporal connection). So there is a need to link and unlink portable connected audio and lighting devices.

To address these issues, the present invention provides for conditional pairing between audio and lighting devices wherein at least one of the devices is portable: when they are in each other's proximity, they are paired; otherwise they are not. Paired devices provide 'connected content' (e.g. an audio device paired to a lamp rendering a fireplace light effect will play fireplace audio, or a lamp that is paired to an audio device rendering sounds of the sea will render a related lighting effect). In other words, devices which are paired act together to contribute to a single sensory effect within the environment, e.g. a user may need only to provide control commands (e.g. via a user terminal) to one of the paired devices and the other paired device would act automatically to contribute to the effect. For example a user might control a lighting device to render a fireplace effect and a paired audio device could then automatically (i.e. without further user input) render a fireplace sound effect.
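The conditional pairing described above can be sketched in a few lines of Python. This is a minimal illustration only: the `Device` class, the `update_pairing` helper and the 5 m threshold are assumptions made for the example, not taken from the disclosure.

```python
import math

PAIRING_THRESHOLD_M = 5.0  # assumed "first threshold proximity" (illustrative)

class Device:
    """Minimal illustrative device: a 2D position and a set of capabilities."""
    def __init__(self, x, y, capabilities):
        self.x, self.y = x, y
        self.capabilities = set(capabilities)
        self.current_effect = None

    def distance_to(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

    def render(self, effect):
        self.current_effect = effect

def update_pairing(first, second, effect, supplementary):
    """Pair the devices only while they are within the threshold proximity
    and the second device is capable of the supplementary effect."""
    first.render(effect)  # the first device always renders its own effect
    in_range = first.distance_to(second) <= PAIRING_THRESHOLD_M
    if in_range and supplementary in second.capabilities:
        second.render(supplementary)  # connected content
        return True
    second.render(None)  # pairing not established (or broken): stop it
    return False
```

Calling `update_pairing` again after either device moves re-evaluates the proximity, so the pairing follows the devices as they are carried around.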

In a more complex embodiment, multiple audio devices and lighting devices are acting as a single group. Individual lighting devices can be paired with the nearest audio devices, but it is also possible that the location of the lighting device determines a light effect location in the group of audio devices, and the multi-channel audio is rendered on multiple speakers, taking the light effect location into account. In this way, an audio effect (or its "sweet spot") can be created which appears to be positioned at a location between the multiple speakers in such a way that it is substantially co-located with the location of the rendered lighting effect.
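One simple way to realize the "sweet spot" positioning described in this paragraph is to weight each speaker's amplification factor by its inverse distance to the light effect location. The function name, the weighting scheme and the `eps` regularization below are illustrative assumptions; a real system might instead use a proper audio panning law.

```python
import math

def amplification_factors(light_pos, speaker_positions, eps=0.5):
    """Illustrative inverse-distance weights: speakers nearer the light
    effect location receive a larger share of the audio, pulling the
    perceived audio location toward the rendered lighting effect.
    `eps` avoids a singularity when a speaker coincides with the effect."""
    raw = []
    for sx, sy in speaker_positions:
        d = math.hypot(light_pos[0] - sx, light_pos[1] - sy)
        raw.append(1.0 / (d + eps))
    total = sum(raw)
    return [w / total for w in raw]  # normalized to sum to 1
```

For example, with the light effect at the origin and speakers at 1 m and 4 m, the nearer speaker receives the larger factor, so the combined audio image sits closer to the lighting effect.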

It is also possible that a lighting system is distributed (e.g. a pixel controlled light strip), so it can produce effects at different locations. Lighting content can also be attributed with main effect locations and these locations can be used to amplify the different audio streams that are associated to the effect.

Furthermore, the above considerations are not limited to the pairing of lighting devices with audio devices. Alternatively or additionally, it may be desirable to coordinate the operation of a lighting device with one or more other transducer devices such as a scent dispenser (again where at least one of the devices is portable).

According to an aspect disclosed herein, there is provided a first device for rendering an environmental effect in at least a region of an environment, the device comprising: a wireless interface; a transducer for rendering said environmental effect; and a controller configured to: determine a supplementary environmental effect, wherein said supplementary environmental effect is a lighting effect and said environmental effect is a sensory effect other than a lighting effect, or said supplementary environmental effect is a sensory effect other than a lighting effect and said environmental effect is a lighting effect; determine at least one second device, wherein at least one of the first and second devices is a portable device such that a proximity of the first and second devices varies over time; detect at least a current proximity of said at least one second device to said region of the environment; evaluate whether said current proximity of said at least one second device to said region of the environment is within a first threshold proximity; control the transducer to render said environmental effect; and on condition that said current proximity of said at least one second device to said region of the environment is determined to be within the first threshold proximity based on said evaluation, control the at least one second device, via the wireless interface, to render said supplementary environmental effect.

In embodiments, the controller is further configured, following said controlling the at least one second device, via the wireless interface, to render said supplementary environmental effect, to: determine at least one third device, wherein at least one of the first and third devices is a portable device such that a proximity of the first and third devices varies over time; detect at least a current proximity of said at least one third device to said region of the environment; evaluate whether said current proximity of said at least one third device to said region of the environment is within a second threshold proximity; control the transducer to render said environmental effect; and on condition that said current proximity of said at least one third device to said region of the environment is determined to be within the second threshold proximity based on said evaluation, control the at least one third device, via the wireless interface, to render said supplementary environmental effect.

In embodiments, the environmental effect and the supplementary environmental effect are temporally synchronized.

In embodiments, said determining at least one second device is performed at least periodically.

In embodiments, said determining at least one second device is performed at least responsive to user input received via the wireless interface.

In embodiments, said determining at least one second device is performed at least responsive to the controller detecting that at least one of the first device and the second device has moved. In embodiments, said first device is a first portable device and said second device is a fixed device, and wherein said determining at least one second device is performed responsive to the controller detecting that the first portable device has moved.

In embodiments, the first portable device further comprises an inertial sensor and said detecting that the first portable device has moved is based at least on input from the inertial sensor.

In embodiments, said detecting that the first portable device has moved is based at least on input from a location network.

In embodiments, the controller is further configured to determine a capability of said at least one second device, and said controlling the at least one second device, via the wireless interface, to render said supplementary environmental effect is performed on the further condition that said at least one second device is capable of rendering said supplementary environmental effect.

In embodiments, the first device is located within said region of the environment, or at least close enough that the positions of the device and the region substantially coincide as viewed by a user, e.g. both the device and the effect region may be in the same corner of the environment or otherwise in the same direction relative to the user. In such embodiments, said current proximity of said at least one second device to said region of the environment may be an estimated current proximity of said at least one second device to said region of the environment, and wherein the detection of the estimated current proximity is performed at least by detecting a current proximity of said at least one second device to the first device.

In embodiments, it may still be sufficient to estimate the proximity of the region to the second device based on the proximity of the second device to the first device even when the first device is not located within said region of the environment. I.e. the first device may be located outside said region of the environment and the current proximity of said at least one second device to said region of the environment is estimated by determining a current proximity of said at least one second device to the first device.
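The estimation described in these embodiments can be sketched as follows: the second device's proximity to the effect region is approximated by its distance to the first device, then evaluated against the first threshold proximity. The function name and the threshold value are illustrative assumptions.

```python
import math

FIRST_THRESHOLD_M = 5.0  # assumed threshold value, for illustration only

def second_device_in_region(first_pos, second_pos):
    """Estimate the second device's proximity to the effect region via its
    distance to the first device (assuming the first device and the region
    substantially coincide), then evaluate it against the threshold."""
    estimated = math.hypot(first_pos[0] - second_pos[0],
                           first_pos[1] - second_pos[1])
    return estimated <= FIRST_THRESHOLD_M
```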

In embodiments, said determining the at least one second device comprises determining at least one second device within the environment.

In embodiments, said supplementary environmental effect is a lighting effect and said environmental effect is an audio effect, or said supplementary environmental effect is an audio effect and said environmental effect is a lighting effect. In embodiments, said supplementary environmental effect is a lighting effect and said environmental effect is an olfactory effect, or said supplementary environmental effect is an olfactory effect and said environmental effect is a lighting effect.

According to another aspect disclosed herein, there is provided a method of rendering an environmental effect in at least a region of an environment by at least a first device, the method comprising steps of: determining a supplementary environmental effect, wherein said supplementary environmental effect is a lighting effect and said environmental effect is a sensory effect other than a lighting effect, or said supplementary environmental effect is a sensory effect other than a lighting effect and said environmental effect is a lighting effect; determining at least one second device, wherein at least one of the first and second devices is a portable device such that a proximity of the first and second devices varies over time; detecting at least a current proximity of said at least one second device to said region of the environment; evaluating whether said current proximity of said at least one second device to said region of the environment is within a first threshold proximity;

rendering said environmental effect; and on condition that said current proximity of said at least one second device to said region of the environment is determined to be within the first threshold proximity based on said evaluation, controlling the at least one second device to render said supplementary environmental effect.

According to another aspect disclosed herein, there is provided a computer program product for rendering an environmental effect in at least a region of an environment by at least a first device, the computer program product comprising code embodied on a computer-readable storage medium, wherein the code is configured, when run on one or more processing units, to perform operations of: determining a supplementary environmental effect, wherein said supplementary environmental effect is a lighting effect and said environmental effect is a sensory effect other than a lighting effect, or said supplementary environmental effect is a sensory effect other than a lighting effect and said environmental effect is a lighting effect; determining at least one second device, wherein at least one of the first and second devices is a portable device such that a proximity of the first and second devices varies over time; detecting at least a current proximity of said at least one second device to said region of the environment; evaluating whether said current proximity of said at least one second device to said region of the environment is within a first threshold proximity; rendering said environmental effect; and on condition that said current proximity of said at least one second device to said region of the environment is determined to be within the first threshold proximity based on said evaluation, controlling the at least one second device to render said supplementary environmental effect.

BRIEF DESCRIPTION OF THE DRAWINGS

To assist understanding of the present disclosure and to show how embodiments may be put into effect, reference is made by way of example to the accompanying drawings in which:

Fig. 1 is a schematic diagram of a system including a plurality of fixed devices and a portable device,

Fig. 2 is a schematic diagram of another system with a plurality of fixed devices and a portable device,

Fig. 3 is a schematic diagram of a portable device,

Fig. 4 shows a floorplan of a room in which a system according to the present invention is disposed,

Fig. 5 illustrates a scenario in which a portable lighting device is paired with a fixed audio device,

Fig. 6 illustrates a scenario in which a portable lighting device is paired with two fixed audio devices,

Fig. 7 illustrates a scenario in which a portable lighting device is paired with a fixed audio device,

Fig. 8 is a flowchart showing a method according to embodiments of the present invention,

Fig. 9a shows a floorplan of a room illustrating a scenario in which multiple devices are paired together in accordance with embodiments of the present invention,

Fig. 9b is a graph showing the audio amplification factors for each of the audio devices of Fig. 9a as a function of time.

DETAILED DESCRIPTION OF EMBODIMENTS

Portable audio and portable lighting devices are becoming popular. Advanced lighting devices are able to render nature phenomena such as fire, sea, forest, starry sky or moving clouds. Those effects call for an auditory component to strengthen the overall effect. The inventors envisage that one or more portable light sources and/or speakers could be incorporated into a connected lighting and sound system along with one or more conventional, fixed light sources and/or speakers. However, current systems assume fixed locations of the devices in the system.

For example, the current schemes for creating and recalling lighting scenes assume a fixed location of the light sources that are part of the scene. With portable light sources, on the other hand, not all of the sources in the system will be bound to one location, but rather can be taken by the user to different locations to suit diverse lighting needs. Thus with the introduction of portable light sources the creation and recalling of static and dynamic lighting scenes will need to be redefined. As some of the light points will no longer be fixed in space, any previously created, preprogrammed or otherwise predefined static or dynamic scene might create a different ambiance depending on the current locations of the portable light source(s) (whether light sources that were originally part of the scene and have now moved, or light sources that were not part of the original scene but have now been moved into the scene). Thus the interaction paradigm for interacting with such systems is changed: where traditional lighting behavior is static and confined to a predetermined set of lamps, these assumptions no longer hold. It would be desirable to provide a more user friendly control of portable lighting devices. Similar reasoning holds with respect to portable audio devices.

The following discloses a technique whereby audio and lighting effects are played out synchronously when the audio and lighting devices are in each other's neighborhood. For instance, a portable lighting device is rendering a (dynamic) light scene. It establishes a connection with a nearby connected loudspeaker such that a sound is rendered which matches and/or strengthens the rendered lighting effect. A concept of pairing an audio speaker and lighting device is explained. This concept is generalized to a system with multiple audio and lighting devices. It is explained how audio and lighting can be played out synchronously.

There is a trend that lighting and audio products become connected and also portable. For example, in a garden and/or home setting, devices can be placed at certain locations to create a certain atmosphere and a few days later, they can be placed somewhere else to support a different function. An example of such a use case is a Hue Go portable light, which might be used to support an AmbiLight function when placed next to the TV, or it can contribute to a cozy fireplace atmosphere when placed in the garden.

Together with light, audio amplifies atmospheres with soundscapes. For instance, advanced lighting devices are able to render nature phenomena such as fire, sea, forest, starry sky or moving clouds. Those effects call for an auditory component to strengthen the overall effect. In other use cases, when audio is leading (e.g. when the user selected a playlist), the light can contribute to the auditory atmosphere by playing out light effects synchronized with the audio.

The present invention proposes the pairing of the portable device when it is in the proximity of another device (fixed or portable). When paired, audio and lighting content is played out synchronously ("connected content"). When they are moved out of each other's proximity, the pairing stops.

In some embodiments, a single audio device and a single lighting device can be paired. For instance, the portable lighting device renders a (dynamic) light scene and establishes a connection with a nearby connected loudspeaker such that a sound is rendered which matches and/or strengthens the rendered lighting effect. In a paired situation, both devices can support each other by playing "connected content". The devices have means to detect that they are physically located nearby each other. The pairing is broken when they are placed apart, and is restored when they are placed nearby each other.
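The break-and-restore behavior described above invites a small amount of hysteresis, so that a device sitting right at the threshold does not rapidly pair and unpair. The sketch below is an illustrative assumption on top of the disclosure: the class name, the use of hysteresis and both threshold values are not specified by the patent.

```python
class PairingMonitor:
    """Illustrative pairing state with hysteresis: a smaller distance is
    required to establish the pairing than to break it, so devices near
    the boundary do not flap between paired and unpaired states."""
    def __init__(self, pair_at=4.0, unpair_at=6.0):
        assert pair_at < unpair_at  # hysteresis band must be non-empty
        self.pair_at = pair_at
        self.unpair_at = unpair_at
        self.paired = False

    def update(self, distance):
        if not self.paired and distance <= self.pair_at:
            self.paired = True   # placed nearby: pairing (re)established
        elif self.paired and distance > self.unpair_at:
            self.paired = False  # placed apart: pairing broken
        return self.paired
```

With these assumed thresholds, a device at 5 m keeps whatever state it last had: still paired if it was paired, still unpaired if it was not.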

In other embodiments, multiple audio devices and lighting devices can act as a single group. Individual lighting devices can be paired with the nearest audio devices, but it is also possible that the location of the lighting device determines a light effect location in the group of audio devices, and the multi-channel audio is rendered on multiple speakers, taking the light effect location into account. In this way, an audio effect (or its "sweet spot") can be created which appears to be positioned at a location between the multiple speakers in such a way that it is substantially co-located with the location of the rendered lighting effect.

It is also possible that a lighting system is distributed (e.g. a pixel controlled light strip), so it can produce effects at different locations. Lighting content can also be attributed with main effect locations and these locations can be used to amplify the different audio streams that are associated to the effect.

Figures 1 and 2 illustrate examples of a system with a plurality of fixed lighting devices 4 and a plurality of fixed audio devices 5 (generally referred to as fixed devices) within an environment 2. It is appreciated that while three fixed lighting devices 4 and two fixed audio devices 5 are illustrated in figure 1, any number or combination of fixed lighting/audio devices may be present. The system also includes at least one portable device 18 which may be either a portable lighting device or a portable audio device.

A user input device 6 is able to control the fixed devices and the at least one portable device 18 by at least sending control signals to the devices, and in embodiments also receiving signals such as acknowledgments and/or status reports back from the devices. There are various possibilities for implementing these communications, either directly or indirectly, as known to persons skilled in the art. Figures 1 and 2 illustrate two different examples.

Figure 1 shows an example in which the communication is implemented via one or more direct connections 22 between the user interface device 6 and the one or more portable devices 18, and/or one or more direct connections 20 between the one or more portable devices 18 and the one or more fixed lighting devices 4 and audio devices 5. Direct here means without the involvement of a lighting bridge 12 or other such intermediate control device (as illustrated in relation to Figure 2). The connections 20, 22 could take the same form as one another, or a different form; and either or both may take the same form as the connection between the user interface device 6 and the fixed device(s), or a different form. Therefore these connections 20, 22 may take any suitable form known in the art, for instance using a short-range RF technology such as Wi-Fi (e.g. via a Wi-Fi router), or ZigBee or Bluetooth (e.g. not involving a separate router). Another possibility for the connection between the fixed and portable light sources 4, 18 is coded light (data embedded in their emitted illumination by modulating the illumination at a frequency substantially beyond human perception). The only restriction for implementing the embodiments discussed below is that the connection 20 between the portable device(s) 18 and fixed device(s) 4 will be wireless.

Preferably, the connection between the user interface device 6 and the portable device(s) 18 is also wireless, e.g. Wi-Fi, ZigBee or Bluetooth, though it is not excluded that the control could instead be effected via a temporary wired connection with a docking station or the like. Note also that in embodiments, the user interface device 6 could be a separate device from the portable device (i.e. in a separate housing), e.g. a wall panel or a mobile user terminal such as a smart phone, tablet or laptop with an app; or alternatively the user interface device 6 could even be incorporated into the portable device 18 (same housing). The following will be described in terms of a separate device, but it will be appreciated that this is not necessarily limiting.

Figure 2 shows another example, involving a bridge 12 such as a lighting bridge. Here, to effect the various control functions discussed herein, the communication between the user interface device 6 and the one or more portable devices 18 may be implemented via the first connection 14 between the user interface device 6 and the bridge 12, and one or more third connections 24 between the bridge and the portable devices 18; and/or the communication between the one or more portable devices 18 and the one or more fixed devices may be implemented via the one or more third connections between the one or more portable devices 18 and the bridge 12, and the one or more second connections 16 between the bridge 12 and the one or more fixed devices. Again, any of these connections 14, 16, 24 may take the same or a different form from any of the others, and may be implemented using any of the wired or wireless means discussed above, with the only restriction being that (in this embodiment) the third connection 24 between the portable device(s) 18 and the bridge 12 will be wireless.

Generally, any of the features discussed herein involving communication between any combination of the user interface device 6, the one or more fixed devices 4, and the one or more portable devices 18 may occur via any of the means discussed above or others, or any combination of such means, and it will be appreciated that the particular means of communication between these components is not an essential factor in the disclosure.

Figure 3 shows a schematic diagram of a portable device 18 according to the present invention. The portable device comprises a wireless interface 26, a transducer 30, and a controller 28 operatively coupled to both the wireless interface 26 and the transducer 30.

The wireless interface 26 is configured to send and receive signals to and from a communication network, e.g. as described in the examples of figure 1 and figure 2. That is, the wireless interface 26 allows the controller 28 of the portable device 18 to send and receive data to and from the wireless network. Hence, all operations pertaining to the communication of data and/or beaconing signals which are herein attributed to the controller 28 are understood to refer to data and/or beaconing signals communicated between the wireless network and the controller 28 via the wireless interface 26.

As used herein, the term "communications network" is understood to refer to any communications system which allows data to be transferred between at least any two devices belonging to the communications network. This transfer may be direct or indirect, i.e. via another one of the devices in the communications network. Hence, a communications network in accordance with embodiments disclosed herein may comprise two or more devices, i.e. at least one peer-to-peer connection.

The functionality of the wireless interface 26 may be regarded as two functionalities: data communication, and localization. Hence, the wireless interface 26 may in embodiments comprise two or more separate wireless interface modules which may employ different signaling methods and/or standards. For example, the wireless interface 26 may comprise a first module for data communication which uses the ZigBee standard, and a second module for localization which uses the Wi-Fi standard. Other possibilities are known to persons skilled in the art and therefore not mentioned here. In general, the wireless interface 26 comprises at least a transmitter for the purposes of data communication and either a transmitter or receiver for the purposes of localization, though it is not excluded that both functionalities are provided by transceivers - either a single transceiver providing both functionalities, or a separate communications transceiver and localization transceiver.

The portable device 18 may be either a portable lighting device or a portable audio device. Accordingly, the transducer 30 may be an electroluminescent or an electroacoustic transducer. Hence, all operations pertaining to the rendering of an environmental effect which are herein attributed to the controller 28 are understood to refer to environmental effects rendered by the transducer 30.

The controller 28 may be implemented as software running on a processor, as hardware present at the portable device 18, or as a combination of software and hardware.

Figure 4 shows an example environment 2 in which the present invention might be implemented. The environment 2 is a room in which a plurality of audio devices 5a-f and a portable lighting device 18 are provided. In this example the audio devices 5a-f are part of a surround sound system associated with a television 7, which comprises a center audio device 5b, left audio device 5a, right audio device 5c, right surround audio device 5d, left surround audio device 5e, and subwoofer 5f. In this example, the portable device 18 is illustrated as a portable lighting device and the audio devices 5a-f are illustrated as fixed devices, but it is appreciated that this is merely exemplary and that in general the portable device 18 and fixed devices may be any combination of different types of devices capable of rendering an environmental effect, for example a portable audio device and a plurality of fixed lighting devices. Further, the pairing of a device during proximity might also be done for other types of devices and content (e.g. pairing lighting devices with a TV (video content), a fragrance dispenser, or a smartphone or tablet (handling audio and/or video content)).

It is understood that the portable lighting device 18 may be moved around the room during operation, e.g. by a user. The portable device may be able to detect this motion based on a change in its location, or by input from an inertial sensor (e.g. an accelerometer, a magnetometer, etc.). Therefore, the portable lighting device 18 may be closer to one (fixed) audio device at one point in time and then later closer to another, different, audio device.

The controller 28 of the portable lighting device 18 may be configured to determine one of the audio devices which is within a threshold proximity to the portable lighting device 18 (e.g. directly, by signaling between the portable device and the audio devices via the wireless interface 26; or indirectly, by firstly determining the locations of both the portable lighting device 18 and the one of the audio devices). This allows the portable lighting device 18 to "pair" with this audio device. Paired devices act together to render a supplementary environmental effect along with a "main" environmental effect. Pairing is dynamic in that the pairings within a system are updated dynamically; a pairing can be broken when the devices are determined to no longer be within the threshold proximity to each other. In this case, the devices "un-pair". The devices will pair with a new device if one comes within the threshold proximity. Note that a device can pair with more than one other device at a time (see embodiments described later).
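The dynamic pairing behavior described above can be sketched as a proximity test that is re-evaluated whenever the portable device moves. The sketch below is illustrative only: the data structures (device positions keyed by an id) and the Euclidean-distance test are assumptions, not part of the claimed implementation.

```python
def update_pairings(portable_pos, devices, threshold):
    """Return the ids of devices currently within `threshold` of the
    portable device. Devices outside the radius are simply absent from
    the result, i.e. any previous pairing with them is broken.
    (Illustrative sketch only.)"""
    px, py = portable_pos
    paired = set()
    for dev_id, (dx, dy) in devices.items():
        # Pair (or stay paired) while within the threshold proximity.
        if ((dx - px) ** 2 + (dy - py) ** 2) ** 0.5 <= threshold:
            paired.add(dev_id)
    return paired
```

Calling this after each detected movement yields the current pairing set; ids that disappear between successive calls correspond to devices that have "un-paired", and newly appearing ids to devices that have come within the threshold proximity.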

For example, a first portable device paired with a second (fixed device) may be moved by a user to outside the threshold proximity to the second device and to a new location within a threshold proximity to a third device. In this case the first portable device would unpair from the second device when it determines that it is now outside the proximity threshold to the second device, and form a new pairing with the third device when it determines that it is now within the proximity threshold to the third device.

In accordance with embodiments of the present invention, devices may pair with each other when they are within a threshold proximity. For illustration purposes only, figure 4 shows how these threshold proximities might function by illustrating respective threshold proximities 51a-f for each audio device 5a-f. Note that the threshold proximity for each audio device is shown as equal, but this is not necessarily the case. For example, the threshold proximity 51f of the subwoofer 5f might be substantially larger than the threshold proximity 51b for the center audio device 5b.

As illustrated in figure 4, the portable lighting device 18 is not within the threshold proximity of any of the audio devices 5a-f and hence is not paired with any of them. If the portable lighting device 18 is moved (e.g. by a user) to a new location as shown in figure 5, it is now within the threshold proximity 51a of audio device 5a. Thus, the portable lighting device 18 pairs with left audio device 5a.

Similarly, if the portable lighting device 18 is moved to another location, e.g. as shown in figure 6, it is now within the threshold proximity 51e of the left surround audio device 5e and also within the threshold proximity 51f of the subwoofer 5f. In this case, the portable lighting device 18 pairs with both left surround audio device 5e and subwoofer 5f.

In general, the location of the effect rendered by a device and the location of the device itself may not be equivalent. For example, a spotlight may render a lighting effect on the opposite side of the room from the location of the spotlight itself. Because of this, a device rendering an effect in at least a region of an environment preferably pairs with a second device when the second device and the region substantially coincide (e.g. are within a threshold proximity). The above localization examples relate mainly to determining the location of a device, but may be extended to determining the location of the effect region. For example, the location of a lighting device will allow the location of the effect region to be determined (or at least estimated) when combined with orientation information and, preferably, information concerning the layout of the environment. Another example is the use of coded light identifiers and a camera. A camera can detect ID values embedded into the light output of a luminaire, which allows the location of a specific luminaire's lighting effect to be determined as it appears in the captured image of the environment.

Figure 7 shows an example of when the portable lighting device 18 may pair with an audio device 5 despite the portable lighting device 18 itself being further away from the audio device 5 than the threshold. In this case, portable lighting device 18 is rendering lighting effect 19. The lighting effect 19 is within the proximity threshold 51d of audio device 5d; hence the device rendering lighting effect 19 (i.e. portable lighting device 18) pairs with audio device 5d.

Note that the requirement that the region in which the effect is rendered being within the proximity threshold is, in general, only preferable. In embodiments it is sufficient to pair devices when the devices themselves are within a proximity threshold of one another. In those cases, the location of the effect region and the location of the device rendering that effect may be regarded as the same. For example, the portable lighting device 18 may be regarded as being located within its effect region which allows the proximity of the audio device 5 to the effect region to be estimated by determining the proximity of the devices themselves.

Once paired, both devices can be configured to act as a pair, as described in more detail below. In order to facilitate this pairing when devices are in the neighborhood of one or more other devices, at least one of the devices needs to detect the proximity of the other. There are several options regarding the proximity detection. Several examples are given below, but others are known to persons skilled in the art.

In a first example, both devices may be part of a location detection system, for example an indoor positioning network comprising a plurality of positioning nodes or beacons which each emit beaconing signals for the purposes of location determination. Devices then use tags, or they estimate their positions relative to some of the beacons.

In a device-centric approach the device takes measurements of properties of the beaconing signals from at least one beacon, e.g. ToF, RSSI, AoA, or some combination, as received by the wireless interface. This allows the controller of the device to determine the location of the device relative to the beacons using e.g. trilateration, triangulation, multilateration.
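As a minimal sketch of the device-centric approach, three range estimates (e.g. derived from ToF measurements) to beacons at known positions suffice to trilaterate a 2D position. The linearization below is illustrative only; it assumes exact ranges and ignores the measurement noise a real positioning system must average out.

```python
def trilaterate(beacons, distances):
    """Estimate (x, y) from three beacon positions and measured ranges.
    Subtracting pairs of circle equations cancels the quadratic terms,
    leaving a 2x2 linear system solved by Cramer's rule.
    (Illustrative sketch; real systems use least squares over many
    noisy measurements.)"""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = distances
    # Circle (1) minus circle (2), and circle (1) minus circle (3):
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

For instance, with beacons at (0, 0), (4, 0) and (0, 4) and ranges measured from the point (1, 1), the solver recovers (1, 1).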

In a network-centric approach the device emits a positioning signal via the wireless interface which is received by at least one positioning node. Measurements of properties of the positioning signal, e.g. ToF, RSSI, AoA, or some combination as received by the at least one positioning node are then used to calculate the position of the device relative to the positioning nodes, either by one of the positioning nodes or by a central positioning server.

Hybrid methods are also known in the art which involve a combination of the device-centric and network-centric approaches.

Once the devices have determined their locations within the location detection system, they can then communicate their locations, which allows a proximity (e.g. distance) between devices to be calculated. When devices are within a threshold proximity, they can be paired.

In a second example, the devices may communicate localization signals directly between one another, in either or both directions, via their wireless interfaces. Properties of these localization signals can then be used directly at at least one of the devices to determine a proximity of the devices, for example using RSSI or ToF. Alternatively, one or both of the devices may only take measurements of the localization signals, and submit the "raw" data to a central positioning server which calculates a proximity of the devices and returns the result to at least one of the devices. If multiple devices are detected in the proximity, RSSI may be used to detect the nearest device.
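For illustration, a common way to turn RSSI into a proximity estimate is the log-distance path-loss model, with the strongest received signal taken as the nearest device. The calibration values below (RSSI at 1 m, path-loss exponent) are illustrative assumptions, not parameters specified by the application.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate range in metres from an
    RSSI reading. tx_power_dbm is the assumed RSSI at 1 m.
    (Illustrative calibration values; free-space exponent assumed.)"""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def nearest_device(rssi_by_device):
    # Strongest RSSI (least negative dBm) is taken as the nearest device.
    return max(rssi_by_device, key=rssi_by_device.get)
```

A weaker reading thus maps to a larger estimated distance, and comparing readings across devices yields the nearest candidate for pairing without any absolute position fix.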

In a third example, the devices may be equipped with an audio based position estimation system (e.g. as described in US8279709). This makes it possible to detect the relative positions of devices, which is sufficient to detect the proximity of devices. For example, if a device has an integrated microphone array it can determine the relative location of a loudspeaker. If the device is a lighting device then the lighting device may adjust the (direction of the) lighting effect toward the determined relative location of the loudspeaker.

In a fourth example, audio devices in the system may include audio watermarks or audio patterns in their respective output audio effects. A device equipped with microphones or vibration sensors is then able to look for and detect these watermarks and patterns. By having the device informed of the watermark of each audio device (e.g. provided by each controller of the audio devices, or by a central audio controller, and preferably stored locally at the device), the device is able to separate the received audio signals and determine which audio devices are being "heard". When the measured pattern associated with a particular audio device is strong enough, it may be determined that that audio device is within the threshold proximity.
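This watermark lookup could be sketched as a normalized cross-correlation of the captured signal against each known per-device pattern. Real audio watermarking uses far more robust spread-spectrum schemes with proper synchronization, so the following is a toy illustration only.

```python
def detect_watermarks(samples, watermarks, threshold):
    """Correlate the captured microphone signal against each known
    per-device watermark pattern; devices whose best normalized
    correlation reaches `threshold` are considered 'heard'.
    (Toy sketch, not a production watermark detector.)"""
    heard = {}
    for dev_id, pattern in watermarks.items():
        n = len(pattern)
        best = 0.0
        for off in range(len(samples) - n + 1):
            seg = samples[off:off + n]
            num = sum(a * b for a, b in zip(seg, pattern))
            den = (sum(a * a for a in seg) * sum(b * b for b in pattern)) ** 0.5
            if den:
                best = max(best, num / den)
        if best >= threshold:
            heard[dev_id] = best
    return heard
```

The returned correlation strengths also give a crude proxy for proximity: the louder (stronger-correlating) a device's watermark, the closer that device is likely to be.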

Portable devices may be battery driven, and hence have a limited battery life. Therefore, to avoid the portable devices continuously having to check their proximity to other devices, one might only initiate proximity detection upon movement of one of the portable devices (e.g. with a tilt sensor or accelerometer). In general, the proximity detection can be performed at any suitable time. For example, it could be performed periodically at configurable time intervals which may be regular (e.g. once every thirty minutes) or irregular, upon receiving a new control command, responsive to the portable device being moved (e.g. as indicated by a g-sensor of the portable device, or as indicated by updated location data indicating the portable device's location has changed), or any combination of these.

When paired devices do not detect each other, the pairing is ended. When proximity is detected, the pairing is started. In the paired state, "connected content" can be rendered on the devices. Connected content contains an audio part and a lighting (or video) related part. The audio and lighting device parts can be played out in a synchronized way. There are a number of possibilities for achieving this synchronization. For example, distribution of content and synchronization can be done via the communication network, and the connected content can be mixed with the content that the device is already rendering. As another example, a more robust synchronization can be done via the lighting device that uses a microphone to detect the audio and synchronizes the light rendering with the detected audio patterns. A microphone array with multiple microphones and echo cancellation algorithms may be used to filter out irrelevant environmental sounds. It is also possible that one device receives the connected content and forwards one part to the paired device.
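The microphone-based synchronization could, for illustration, estimate the playout offset by sliding the reference audio over the captured signal and keeping the lag with the highest correlation. This is a toy sketch; a production system would use onset detection or audio fingerprinting, typically after echo cancellation as noted above.

```python
def estimate_lag(reference, captured, max_lag):
    """Estimate the playback offset (in samples) between the reference
    audio track and the signal captured by the lighting device's
    microphone, so the light rendering can be delayed to match.
    (Brute-force correlation; illustrative only.)"""
    best_lag, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        # Correlate the reference with the captured signal shifted by lag.
        score = sum(r * c for r, c in zip(reference, captured[lag:]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

The lighting device would then schedule its light effects `best_lag` samples behind its internal copy of the connected content, keeping light and audio aligned.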

Figure 8 shows a pairing method performed by the controller 28 of one of the devices in the network (e.g. portable device 18). The method may be implemented at any suitable time, for example at a configurable regular interval (e.g. once every thirty minutes), upon receiving a new control command, responsive to the portable device being moved (e.g. as indicated by a g-sensor of the portable device, or as indicated by updated location data indicating the portable device's location has changed), or any combination of these. The method of figure 8 begins at step S101 assuming that a command has been received to render a particular effect (e.g. a fireplace lighting effect) at portable device 18, though it is appreciated that other indications may be used to initiate the method. For example, the portable device 18 may already be rendering a fireplace effect when it detects that it has been moved. In this case, there may not be an explicit "command" to render the fireplace effect.

However the method begins, controller 28 then proceeds to determine, in step S102, whether there is a corresponding supplementary effect associated with the "main" effect (e.g. is there a fireplace sound available which would enhance the fireplace lighting effect?). This may involve accessing a database D1 storing a lookup table of sensory effects and the supplementary effects with which they are each associated (see example in table 1).

Table 1

Table 1 shows an example of a database allowing supplementary effect(s) to be determined. As can be seen, each effect is represented by its various sensory components. This allows the controller 28 to determine if there are any supplementary effects associated with the "main" effect it is rendering. If there is more than one supplementary effect, the controller 28 may begin again at step S102 with respect to the next supplementary effect. If there are no supplementary effects associated with the "main" effect, controller 28 proceeds to step S103 and renders (or continues rendering) the effect at the device 18.

Once controller 28 has determined a supplementary effect, it proceeds to step S104, in which the controller 28 determines all devices in the network. This information may be stored in a database D2, or may be retrieved by the controller directly from each of the devices. Note however that (particularly for large networks consisting of a large number of devices) steps S105-S109 as shown in figure 8 may be performed simultaneously with step S104, and in any suitable order. That is, the controller 28 may be configured to only find devices in the network which are in the environment 2, or only devices which are capable of rendering the supplementary effect. In other words, steps S104-S109 are not necessarily separate steps, but are illustrated as such for the sake of explanation. For example, if the controller 28 retrieves device information directly from the devices themselves via coded light, this will implicitly mean that these devices are in the environment, thus obviating step S105. In general, however, the controller 28 proceeds to step S105 and excludes devices which have been found on the network but which are located outside of the environment 2. Knowledge about the environment in which the devices are placed (e.g. room information) may similarly be retrieved from the devices themselves, or from a database. This is desirable because two devices might be very proximate, but in different rooms. In this case, it is not sensible to have e.g. the light effect in one room and the audio effect in another room.

Next, at step S106, the controller 28 determines the proximities of the devices, e.g. the respective distances between each of the devices and the portable device 18 (though it may be sufficient to determine only whether or not the devices are within the same spatial zone). This can be determined using the localization methods described above, as they provide the locations of each device and the portable device 18 individually, thus allowing the proximities to be calculated. As mentioned earlier, it may be preferable for the controller 28 to determine the proximity of the portable device 18 to the region of the effect rendered by the devices (or the other way round), using any known method. However, it may be sufficient to assume that the region of the rendered effect of each device is co-located with the device itself, in which case the proximity of each device to the portable device 18 may be determined directly, e.g. by signaling between the portable device 18 and each of the devices and using known methods such as ToF or RSSI.

Once the proximities of the devices have been determined, the controller 28 can at step S107 exclude any devices falling outside a proximity threshold. This leaves the devices within the environment which are acceptably close to the portable device 18. Here "acceptably close" may mean that the devices are close enough to the portable device 18 that their effects, when rendered and experienced by a user within the environment, appear to emanate from substantially the same location, e.g. a lighting device and an audio device within one meter of each other. Note that the threshold may vary depending on the types of devices being considered. For example, the threshold for two lighting devices may be smaller than the threshold for two audio devices, due to the fact that users within the environment are able to notice small discrepancies in the locations of two lighting effects more easily than in those of two audio effects.

Note that if there is more than one device within this threshold, there are two options. Either the controller 28 can select the nearest device (i.e. the device with the closest proximity value) or the controller 28 can proceed to render the supplementary effect using a combination of the more than one device (see embodiments described later pertaining to multiple devices).

Next, the controller proceeds to step S108 and retrieves rendering capabilities for each of the devices. This may be important, for example, because an audio device may want to render a colorful effect (e.g. a rainbow) on a nearby lighting device and hence needs to find a lighting device able to render this, e.g. a pixelated color lighting strip. Similarly, a lighting device may want to accompany a stroke-of-lightning effect with a sound of thunder, and therefore would rather select the subwoofer which is located somewhat further away than the compact mini loudspeaker which happens to be the nearest. Note that in this case it may be necessary to perform step S109 before step S108. Information regarding the rendering capabilities of the devices may be stored locally at each device, or stored centrally in a database D3.

After step S108, the controller 28 is left with a list of devices within the environment which are within the proximity threshold, along with their respective rendering capabilities. Thus, at step S109, the controller determines if there is a device which is capable of rendering the supplementary effect. If there is not, the method proceeds to step S103 and the controller 28 controls the transducer 30 of the portable lighting device 18 to render the "main" effect. If there is a device (or more than one device - see embodiments below) capable of rendering the supplementary effect, the method proceeds to step S110 and the controller 28 controls the transducer 30 of the portable lighting device 18 to render the "main" effect and controls the transducer of the other device to render the supplementary effect.
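Purely as an illustrative summary, the filtering of steps S102-S109 can be sketched as successive list filters over candidate devices. The dictionary-based device records, effect table and capability table below are hypothetical data structures, not part of the claims.

```python
def select_supplementary_device(main_effect, devices, environment, threshold,
                                supplementary_effects, capabilities):
    """Sketch of the figure-8 flow: look up a supplementary effect, then
    filter candidate devices by environment, proximity and rendering
    capability, returning the nearest capable device (single-device
    option). (Illustrative data structures and logic only.)"""
    supp = supplementary_effects.get(main_effect)            # S102
    if supp is None:
        return None, None                                    # S103: main only
    candidates = [d for d in devices
                  if d["room"] == environment]               # S104/S105
    candidates = [d for d in candidates
                  if d["distance"] <= threshold]             # S106/S107
    candidates = [d for d in candidates
                  if supp in capabilities.get(d["id"], set())]  # S108/S109
    if not candidates:
        return None, None
    return supp, min(candidates, key=lambda d: d["distance"])   # S110
```

Returning `(None, None)` corresponds to the S103 branch in which only the "main" effect is rendered; otherwise the caller would drive both transducers as in step S110.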

The following now describes a case with multiple audio and lighting devices. In figure 9a, a room with four audio devices (Ax) and four light strips (Lx) is represented. It is assumed that the lighting devices are already paired with the audio devices as a result of the method according to figure 8 being carried out by the controller 28 of light strip L1. It is also assumed that user input (or otherwise) to L1 has caused the light strips, as a group, to render lighting content comprising a flame effect on the wall, which can be localized in the room. To enhance the flame effect, the audio devices render a flame effect soundscape, where the intensity of the sound is synchronized with the intensity of the lighting effect.

The pairing between devices A1-4 and L1-4 may be arrived at in several ways. The controller of each of light strips L1, L2, L3 and L4 could identify audio devices A1 and A2, A2 and A3, A3 and A4, and A4 and A1, respectively, at step S109 and pair with them. This means that the pairings then "daisy-chain" together such that the behaviors of all eight devices become synchronized. For example, both L1 and L4 pair with audio device A1, and thus an input command to L1 to render an effect propagates through the devices to A1 and L4.

Alternatively, all four audio devices A1-4 and the other three light strips L2-4 may have been identified by the controller 28 as satisfying the conditions under step S109. This would be the case for a larger proximity threshold than in the above "daisy-chaining" example.

Amplification factors are sent to the audio devices, such that the audio effect is distributed over two audio devices, and is synchronized with the location of the light effect. When the light effect is close to an audio device, the amplification is maximal. When the light effect is in between two devices, the amplification is distributed over the two devices. This is illustrated in figure 9b.
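The distribution of figure 9b can be illustrated with a simple linear cross-fade of amplification factors over the two audio devices nearest the light effect. The one-dimensional geometry and function name below are hypothetical sketches, not the claimed implementation.

```python
def amplification_factors(effect_x, a1_x, a2_x):
    """Linear distribution of the audio amplification over two devices
    based on the light-effect location: maximal at the nearer device,
    split when the effect lies between them. (Illustrative sketch.)"""
    # Normalised position of the light effect between the two devices.
    t = min(max((effect_x - a1_x) / (a2_x - a1_x), 0.0), 1.0)
    return 1.0 - t, t
```

As the rendered flame moves along the light strip, these factors would be recomputed and sent to the audio streaming control so that the soundscape tracks the light effect.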

Pairing of the groups of devices means that they have a notion of location and proximity. This notion can be present at device level or at system level.

In the case of device level, the lighting devices have their own coordinate system and a notion of the devices that are in their proximity. In the case of L1, the lighting device knows that audio devices A1 and A2 are at the ends of the lighting device. It is also possible that an audio device Ax (not indicated in the figure) is placed in between A1 and A2, so L1 has to take these three devices into account. When rendering the fire effect at a certain location in L1, it calculates the amplification factors for A1 and A2 (and Ax) and forwards these to the audio streaming control.

In the case of system level, the system is aware of the location of the effect, so it drives the audio devices with the amplification factors. The lighting devices receive the controls to render the effect at that location. When portable audio devices are moved, their location in the system is updated. When audio devices are moved outside the vicinity of the device area, they are also removed from the rendering of the connected content.

As shown in the example of figures 9a and 9b, it is possible that an audio device is paired with multiple lighting devices (A1 is paired with L1 and L4). The same is valid for the lighting devices (L1 is paired with A1, A2 and optionally Ax).

When the rendering is done at device level, the light strip might be leading, by determining the location of the light effect and sending the amplification factors to the audio devices. The audio devices then have to render the audio content synchronously, while the lighting device adapts the light effect to the audio signal.

It will be appreciated that the above embodiments have been described only by way of example.

For instance, whilst the above has been discussed in terms of a controller 28 implemented in the portable device 18, in alternative embodiments the controller 28 could be implemented in one of the fixed devices, or in a central control device such as the bridge 12, in which cases the controller 28 controls the portable device 18 and the fixed devices by sending signals to them via the respective wireless interfaces 26 of the portable or fixed devices (e.g. via a local RF technology such as Wi-Fi, ZigBee or Bluetooth).

Further, while some concepts disclosed herein may have been described in terms of a portable lighting device and fixed audio devices, in embodiments the teachings herein can also extend to detecting when a portable audio device is in the vicinity of more than one fixed lighting device. Even more generally, the teachings herein can extend to detecting when a portable lighting device is in the vicinity of more than one fixed scent dispenser capable of rendering an olfactory effect within the environment, or to detecting when a portable olfactory device is in the vicinity of more than one fixed lighting device.

Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.