

Title:
DETECTING SIGNALS EMBEDDED IN VISIBLE LIGHT
Document Type and Number:
WIPO Patent Application WO/2018/015272
Kind Code:
A1
Abstract:
Visible illumination emitted by each of multiple contiguously-mounted lighting units is modulated to embed a respective signal, to be detected by an image processing module based on images captured by a camera. A luminous separator element is disposed along the boundary between the data-transmitting luminous areas of each respective pair of units. The luminous separator element adjoins the data-transmitting areas, and emits light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of luminaires, thereby giving a consistent appearance to human viewers. However it also has a distinguishing property, visible to the camera, that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas. The image processing module can then distinguish between the different signals embedded in the illumination from the different lighting units based on the appearance of the separator elements.

Inventors:
DE BRUIJN FREDERIK JAN (NL)
NIJSSEN STEPHANUS JOSEPH JOHANNES (NL)
VAN VOORTHUISEN PAUL HENRICUS JOHANNES MARIA (NL)
DAVIES ROBERT JAMES (NL)
JANSSEN ONNO MARTIN (NL)
Application Number:
PCT/EP2017/067747
Publication Date:
January 25, 2018
Filing Date:
July 13, 2017
Assignee:
PHILIPS LIGHTING HOLDING BV (NL)
International Classes:
H04B10/116
Domestic Patent References:
WO2015000772A1, 2015-01-08
Foreign References:
US20160154088A1, 2016-06-02
US20150276399A1, 2015-10-01
US20110105134A1, 2011-05-05
EP2940890A1, 2015-11-04
Other References:
DUCO SCHREUDER: "Outdoor Lighting: Physics, Vision and Perception", 2008, SPRINGER SCIENCE, ISBN: 978-1-4020-8601-4
Attorney, Agent or Firm:
TAKKEN, Robert, Martinus, Hendrikus et al. (NL)
CLAIMS:

1. A system comprising:

multiple contiguously-mounted lighting units (100) arranged to emit visible illumination to illuminate an environment (109);

transmission circuitry (104) arranged to modulate the visible illumination emitted by each of the multiple lighting units so as to embed a respective signal into the visible illumination emitted by each of the lighting units, wherein each of the multiple lighting units has a data-transmitting luminous area (107) from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair (100a, 100b) of the multiple lighting units having a boundary (301) therebetween;

detecting equipment (110) comprising a camera (112) for capturing one or more images of the data-transmitting luminous areas of a plurality of said multiple lighting units, and an image processing module (114) configured to detect the respective signals based thereon; and

for each respective instance of said adjacent pairs of lighting units, a luminous separator element (501) disposed along some or all of the boundary between the data- transmitting luminous areas of the respective pair of lighting units, adjoining the data- transmitting areas of both of the respective pair of lighting units, the separator element being configured to emit visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units;

wherein the image processing module of the detecting equipment is configured to detect said distinguishing property in the one or more images in order to distinguish the light emitted by the separator elements from the illumination emitted by the data transmitting luminous surfaces when the data-transmitting areas of more than one of said plurality of lighting units appear simultaneously in each of the one or more images, and based thereon to segment each of the one or more images into different spatial regions to distinguish between the different signals embedded in the illumination from said plurality of lighting units.

2. The system of claim 1, wherein said distinguishing property comprises a modulation in the visible light emitted by the separator elements with a different modulation spectrum than the modulation of the illumination from the multiple lighting units.

3. The system of claim 1, wherein the distinguishing property is that the light emitted by the separator elements is not modulated.

4. The system of claim 3, wherein each of the lighting units (100) is comprised by a respective luminaire comprising a plurality of LED groups, a first one or more of said LED groups being arranged to emit said illumination and a second one or more of the LED groups being arranged to act as said separator elements; and

within each luminaire, current from a same driver circuit is split between a branch driving the one or more first LED groups and a branch driving the one or more second LED groups, with the transmission circuitry (104) being arranged to inject the signal into only the branch driving the first LED groups and not the second LED groups.

5. The system of claim 1, wherein the distinguishing property comprises a component in an infrared or ultraviolet part of the electromagnetic spectrum.

6. The system of any preceding claim, wherein the distinguishing property is the same for all of the separator elements (501).

7. The system of any of claims 1, 2 or 5, wherein the distinguishing property differs for different ones or subgroups of the separator elements (501).

8. The system of claim 7, wherein the difference in said property between different ones or subgroups of the separator elements (501) indicates different directions, and wherein the image processing module (114) is configured to determine an orientation of the camera (112) with respect to the indicated directions based on the appearance of one or more of the separator elements in the captured images.

9. The system of claim 7 or 8, wherein the difference in said property between different ones or subgroups of the separator elements (501) indicates different subregions within the environment (109), and wherein the image processing module (114) is configured to determine a location of the camera (112) with respect to the subregions based on the appearance of one or more of the separator elements in the captured images.

10. The system of claim 1, wherein for each of the adjacent pairs of lighting units (100a, 100b): the system comprises two luminous separator elements (501a, 501b), one along an edge of each of the respective pair of lighting units; the light emitted by the two separator elements (501) is arranged to mix to produce a combined signal; and wherein the modulation of the light emitted by the separator elements is not synchronized so that the combined signal is different than if the two separator elements were emitting individually; the image processing module of the detecting equipment being configured to distinguish between the different signals based on the combined signal.

11. The system of any of claims 1 to 9, wherein for each of the adjacent pairs of lighting units (100a, 100b): the system comprises two luminous separator elements (501a, 501b), one along an edge of each of the respective pair of lighting units; and the modulation emitted by the two separator elements is synchronized.

12. Lighting equipment comprising:

multiple contiguously-mounted lighting units (100) arranged to emit visible illumination to illuminate an environment (109);

transmission circuitry (104) arranged to modulate the visible illumination emitted by each of the multiple lighting units so as to embed a respective signal into the visible illumination emitted by each of the lighting units, wherein each of the multiple lighting units has a data-transmitting luminous area (107) from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair (100a, 100b) of the multiple lighting units having a boundary (301) therebetween; and

for each respective instance of said adjacent pairs of lighting units, a luminous separator element (501) disposed along some or all of the boundary between the data- transmitting luminous areas of the respective pair of lighting units, adjoining the data- transmitting areas of both of the respective pair of lighting units, the separator element being configured to emit visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units.

13. Detecting equipment (110) for detecting different respective signals embedded in visible illumination emitted by multiple contiguously-mounted lighting units (100) to illuminate an environment (109), wherein each of the multiple lighting units has a data- transmitting luminous area (107) from which the illumination is emitted, with the data- transmitting luminous areas of each adjacent pair (100a, 100b) of the contiguous lighting units having a boundary (301) therebetween; the detecting equipment comprising:

a camera (112) for capturing one or more images of the data-transmitting luminous areas of a plurality of said multiple lighting units; and

an image processing module (114) configured to detect the respective signals based thereon;

wherein the image processing module of the detecting equipment is configured to distinguish between the different signals embedded in the illumination from said plurality of lighting units based on an appearance in said one or more images of: for each respective adjacent pair of said plurality of lighting units, a luminous separator element (501) disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data-transmitting areas of both of the respective pair of lighting units, the separator element emitting visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data- transmitting areas of the multiple lighting units;

the image processing module of the detecting equipment being configured to detect said distinguishing property in the one or more images in order to distinguish the light emitted by the separator elements from the illumination emitted by the data-transmitting luminous surfaces when the data-transmitting areas of more than one of said plurality of lighting units appear simultaneously in each of the one or more images, and based thereon to segment each of the one or more images into different spatial regions to distinguish between the different signals embedded in the illumination from said plurality of lighting units.

14. A method of emitting illumination comprising:

using multiple contiguously-mounted lighting units (100) to emit visible illumination to illuminate an environment (109);

modulating the visible illumination emitted by each of the multiple lighting units so as to embed a respective signal into the visible illumination emitted by each of the lighting units, wherein each of the multiple lighting units has a data-transmitting luminous area (107) from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair (100a, 100b) of the multiple lighting units having a boundary (301) therebetween; and

for each respective instance of said adjacent pairs of lighting units, using a luminous separator element (501) disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data- transmitting areas of both of the respective pair of lighting units, to emit visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units.

15. A method of detecting different respective signals embedded in visible illumination emitted by multiple contiguously-mounted lighting units (100) to illuminate an environment (109), wherein each of the multiple lighting units has a data-transmitting luminous area (107) from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair (100a, 100b) of the contiguous lighting units having a boundary (301) therebetween; the method comprising:

using a camera (112) for capturing one or more images of the data- transmitting luminous areas of a plurality of said multiple lighting units; and

using an image processing module (114) configured to detect the respective signals based thereon;

wherein the detection comprises distinguishing between the different signals embedded in the illumination from said plurality of lighting units based on an appearance in said one or more images of: for each respective adjacent pair of said plurality of lighting units, a luminous separator element (501) disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data-transmitting areas of both of the respective pair of lighting units, the separator element emitting visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units; and

wherein said distinguishing comprises detecting said distinguishing property in the one or more images in order to distinguish the light emitted by the separator elements from the illumination emitted by the data transmitting luminous surfaces when the data- transmitting areas of more than one of said plurality of lighting units appear simultaneously in each of the one or more images, and based thereon segmenting each of the one or more images into different spatial regions to distinguish between the different signals embedded in the illumination from said plurality of lighting units.

16. A computer program product for detecting different respective signals embedded in visible illumination emitted by multiple contiguously-mounted lighting units (100) to illuminate an environment (109), wherein each of the multiple lighting units has a data-transmitting luminous area (107) from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair (100a, 100b) of the contiguous lighting units having a boundary (301) therebetween; the computer program product comprising code embodied on computer-readable storage configured so as when run on one or more processing units to perform operations of:

using a camera (112) for capturing one or more images of the data- transmitting luminous areas of a plurality of said multiple lighting units; and

using an image processing module (114) configured to detect the respective signals based thereon;

wherein the code of said computer program product is configured to distinguish between the different signals embedded in the illumination from said plurality of lighting units based on an appearance in said one or more images of: for each respective adjacent pair of said plurality of lighting units, a luminous separator element (501) disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data-transmitting areas of both of the respective pair of lighting units, the separator element emitting visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data- transmitting areas of the multiple lighting units; wherein said distinguishing comprises detecting said distinguishing property in the one or more images in order to distinguish the light emitted by the separator elements from the illumination emitted by the data transmitting luminous surfaces when the data- transmitting areas of more than one of said plurality of lighting units appear simultaneously in each of the one or more images, and based thereon segmenting each of the one or more images into different spatial regions to distinguish between the different signals embedded in the illumination from said plurality of lighting units.

Description:
Detecting signals embedded in visible light

TECHNICAL FIELD

The present disclosure relates to the embedding of signals in visible light, and to the detection of such signals using a camera.

BACKGROUND

Visible light communication (VLC) refers to the communication of information by means of a signal embedded in visible light, sometimes also referred to as coded light. The information is embedded by modulating a property of the visible light according to any suitable modulation technique. E.g. according to one example of a coded light scheme, the intensity of the visible light from each of multiple light sources is modulated to form a carrier waveform having a certain modulation frequency, with the modulation frequency being fixed for a given one of the light sources but different for different ones of the light sources such that the modulation frequency acts as a respective identifier (ID) of each light source. In more complex schemes a property of the carrier waveform may be modulated in order to embed symbols of data in the light emitted by a given light source, e.g. by modulating the amplitude, frequency, phase or shape of the carrier waveform in order to represent the symbols of data. In yet further possibilities, a baseband modulation may be used - i.e. there is no carrier wave, but rather symbols are modulated into the light as patterns of variations in the brightness of the emitted light. This may either be done directly (intensity modulation) or indirectly (e.g. by modulating the mark:space ratio of a PWM dimming waveform, or by modulating the pulse position).
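As a purely illustrative sketch of the simplest scheme mentioned above (a low-depth intensity ripple whose frequency serves as the light source's ID), the following Python fragment generates such a waveform; the frequencies, sampling rate and modulation depth are arbitrary example values, not values taken from this disclosure.

```python
import numpy as np

def coded_light_waveform(f_id_hz, duration_s=0.1, fs_hz=100_000, depth=0.05):
    """Toy intensity waveform for one lighting unit: a high-frequency,
    low-depth sinusoidal ripple on top of constant illumination, so that
    the ripple frequency acts as the unit's identifier while the average
    light output perceived by a person stays constant."""
    t = np.arange(0, duration_s, 1.0 / fs_hz)
    return 1.0 + depth * np.sin(2 * np.pi * f_id_hz * t)

unit_a = coded_light_waveform(f_id_hz=2000)  # e.g. ID "2 kHz"
unit_b = coded_light_waveform(f_id_hz=3000)  # e.g. ID "3 kHz"
```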

The current adoption of LED technology in the field of lighting has brought an increased interest in the use of coded light to embed signals into the illumination emitted by luminaires, e.g. room lighting, thus allowing the illumination from the luminaires to double as a carrier of information. Preferably the modulation is performed at a high enough frequency and low enough modulation depth to be imperceptible to human vision, or at least such that any visible temporal light artefacts (e.g. flicker or strobe artefacts) are weak enough to be tolerable to humans. Based on the modulations, the information in the coded light can be detected using a photodetector. This can be either a dedicated photocell, or a camera comprising an array of photocells (pixels) and a lens for forming an image on the array. E.g. the camera may be a general purpose camera of a mobile user device such as a smartphone or tablet. Camera based detection of coded light is possible with either a global-shutter camera or a rolling-shutter camera (e.g. rolling-shutter readout is typical of mobile CMOS image sensors found in mobile devices such as smartphones and tablets). In a global-shutter camera the entire pixel array (entire frame) is captured at the same time, and hence a global shutter camera captures only one temporal sample of the light from a given luminaire per frame. In a rolling-shutter camera on the other hand, the frame is divided into lines (typically horizontal rows) and the frame is exposed line-by-line in a temporal sequence, each line in the sequence being exposed at a slightly later time than the last. Thus the rolling-shutter readout causes fast temporal light modulations to translate into spatial patterns in the line-readout direction of the sensor, from which the encoded signal can be decoded. Hence while rolling-shutter cameras are generally the cheaper variety and considered inferior for purposes such as photography, for the purpose of detecting coded light they have the advantage of capturing more temporal samples per frame, and therefore a higher sample rate for a given frame rate. Nonetheless coded light detection can be achieved using either a global-shutter or rolling-shutter camera as long as the sample rate is high enough compared to the modulation frequency or data rate (i.e. high enough to detect the modulations that encode the information).
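The line-by-line exposure can be illustrated with a very simplified model (an assumption-laden sketch, not a description of any particular sensor): each image row samples the luminaire slightly later than the previous one, so a temporal ripple becomes a stripe pattern down the image.

```python
import numpy as np

def rolling_shutter_rows(f_mod_hz, n_rows=480, line_time_s=20e-6, depth=0.05):
    """Simplified rolling-shutter model: each row is exposed line_time_s
    after the previous one, so row k samples a (spatially uniform) luminaire
    at time k * line_time_s, and a temporal intensity ripple shows up as a
    spatial banding pattern along the readout direction."""
    t = np.arange(n_rows) * line_time_s
    return 1.0 + depth * np.sin(2 * np.pi * f_mod_hz * t)  # one value per row

rows = rolling_shutter_rows(2000.0)  # 2 kHz ripple -> banding with a ~25-row period
```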

Coded light has many possible applications. For instance a different respective ID can be embedded into the illumination emitted by each of the luminaires in a given environment, e.g. those in a given building, such that each ID is unique at least within the environment in question. E.g. the unique ID may take the form of a unique modulation frequency or unique sequence of symbols. This can then enable any one or more of a variety of applications. For example if a mobile device for remotely controlling the luminaires is equipped with a light sensor such as a camera, then the user can direct the sensor toward a particular luminaire or subgroup of luminaires so that the mobile device can detect the respective ID(s) from the emitted illumination captured by the sensor, and then use the detected ID(s) to identify the corresponding one or more luminaires in order to control them. This provides a user-friendly way for the user to identify which luminaire or luminaires he or she wishes to control. E.g. the mobile device may take the form of a smartphone or tablet running a lighting control app, with the app being configured to detect the embedded IDs from the captured light and enact the corresponding control functionality. As another example, there may be provided a location database which maps the ID of each luminaire to its location (e.g. coordinates on a floorplan), and this database may be made available to mobile devices from a server via one or more networks such as the Internet and/or a wireless local area network (WLAN). Then if a mobile device captures an image or images containing the light from one or more of the luminaires, it can detect their IDs and use these to look up their locations in the location database in order to detect the location of the mobile device based thereon. E.g. this may be achieved by measuring a property of the received light such as received signal strength, time of flight and/or angle of arrival, and then applying a technique such as triangulation, trilateration, multilateration or fingerprinting, or simply by assuming that the location of the nearest or only captured luminaire is approximately that of the mobile device (and in some cases such information may be combined with information from other sources, e.g. on-board accelerometers, magnetometers or the like, in order to provide a more robust result). The detected location may then be output to the user through the mobile device for the purpose of navigation, e.g. showing the position of the user on a floorplan of the building. Alternatively or additionally, the determined location may be used as a condition for the user to access a location based service. E.g. the ability of the user to use his or her mobile device to control the lighting (or another utility such as heating) in a certain region (e.g. a certain room) may be made conditional on the location of his or her mobile device being detected to be within that same region (e.g. the same room), or perhaps within a certain control zone associated with the lighting in question. Other forms of location-based service may include, e.g., the ability to make or accept location-dependent payments.
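A minimal sketch of the simplest location lookup described above (taking the position of the captured luminaire as an approximation of the device's position) might look as follows; the ID strings, coordinates and database layout are invented here for illustration only.

```python
# Hypothetical luminaire-ID -> floorplan-coordinate table; the real database
# schema and its transport (server, WLAN, etc.) are not specified in this text.
LUMINAIRE_POSITIONS = {
    "lum-2kHz": (3.5, 7.0),
    "lum-3kHz": (6.5, 7.0),
}

def estimate_device_position(detected_ids):
    """Crude estimate: average the known positions of the luminaires whose
    IDs were decoded from the captured illumination."""
    points = [LUMINAIRE_POSITIONS[i] for i in detected_ids if i in LUMINAIRE_POSITIONS]
    if not points:
        return None
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(estimate_device_position(["lum-2kHz"]))  # -> (3.5, 7.0)
```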

As another example, a database may map luminaire IDs to location specific information such as information on a particular museum exhibit in the same room as a respective one or more luminaires, or an advertisement to be provided to mobile devices at a certain location illuminated by a respective one or more luminaires. The mobile device can then detect the ID from the illumination and use this to look up the location specific information in the database, e.g. in order to display this to the user of the mobile device. In further examples, data content other than IDs can be encoded directly into the illumination so that it can be communicated to the receiving device without requiring the receiving device to perform a look-up.

Thus the use of a camera to detect coded light has various commercial applications in the home, office or elsewhere, such as personalized lighting control, indoor navigation, location based services, etc. Typically for such applications the so-called front-facing camera of the smartphone is used (the camera on the same face as the device's main screen, typically a touchscreen). Thus the camera directly captures the luminaires on the ceiling above the user while also keeping the device's screen suitably orientated to be viewed by the user. Figures 2a and 2b show an example of a lighting system composed of adjacent luminaires in the form of ceiling tiles. Figure 2a shows the humanly visible appearance - to the human user the fast modulation of the coded light is imperceptible and the light intensity appears constant. Figure 2b on the other hand shows the appearance as captured by a rolling shutter camera under short exposure capture (with the dashed line indicating the rolling-shutter readout direction). Here the coded light modulation appears as spatial patterns in each of the luminaires, each of which is associated with a different specific code, e.g. a different respective ID. In the example shown the capture is by a rolling-shutter camera such that the message from each luminaire appears as a different spatial pattern in the captured image. However it will be appreciated that capture with a global-shutter camera is also possible, in which case the modulation is captured as a temporal modulation over multiple frames (and in fact with a rolling-shutter camera, in some cases the pattern from multiple frames may be stitched together).

In other forms of wireless data communication, 'channel separability' has to be implemented by mathematical signal orthogonality, e.g. the use of sine waves of different frequency, or more generally frequency multiplexing; or else by the use of a transmission protocol, e.g. use of repeated transmission using randomized packet intervals (the so-called ALOHA protocol). But when multiple luminaires simultaneously fill the field of view of the camera, such that multiple luminaires emitting different signals are captured in the same frame, then image-based segmentation can be used to separate the different luminaires prior to decoding of the information embedded in the coded light. I.e. camera based detection of coded light has the advantage that when light is received simultaneously from multiple coded light sources, it is also received with spatial separation between the light from the different sources, because this light appears in different spatial regions of the image separated by a recognizable gap or division in between (e.g. see again Figure 2a). The image-based segmentation essentially provides a form of channel separation among multiple signals that might be difficult or impossible to decode otherwise. Therefore, concurrent detection of multiple coded light sources does not have to rely on 'channel separability' as an inherent characteristic of the signals themselves.

SUMMARY

However, for aesthetic reasons it is common to concatenate multiple luminaires in order to form one single luminous surface without any visible transitions between the composing luminaires. Presently this is at odds with the desire to include a visible separation between luminaires in order to assist detection of coded light with a camera, where it can be necessary to first identify the separate composing luminaires before decoding the data embedded in the coded light from each. It would be desirable to provide the illusion of a uniform luminous surface to a human whilst still enabling segmentation of the individual composing luminaires to enable decoding of the individual embedded coded light signals. More generally, similar considerations may arise in relation to the concatenation of any lighting units, whether luminaires or individual lamps emitting different signals within the same luminaire.

The present invention solves the above problem by providing a separating element that is detectable by the camera system but invisible to the human eye, preserving the desired uniform appearance of the light source. Particularly, this is achieved by a separating luminous element with a temporal characteristic that sets it apart from any of the data transmitting coded light sources.

According to one aspect disclosed herein, there is provided a system comprising: multiple contiguously-mounted lighting units arranged to emit visible illumination to illuminate an environment, transmission circuitry arranged to modulate the visible illumination emitted by each of the multiple lighting units so as to embed a respective signal into the visible illumination emitted by each of the lighting units, and detecting equipment for detecting the signals. The multiple lighting units are mounted contiguously in a plane, along a straight line or following a one or two dimensional contour (a surface or envelope that is curved or at least not flat in one or two dimensions). Each of the multiple lighting units has a data-transmitting luminous area from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair of the multiple lighting units having a boundary therebetween. The detecting equipment comprises a camera for capturing one or more images of the data-transmitting luminous areas of a plurality of said multiple lighting units, and an image processing module configured to detect the respective signals based thereon. For each respective instance of said adjacent pairs of lighting units, the system further comprises a luminous separator element disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units. The separator element adjoins the data-transmitting areas of the respective pair of lighting units and is configured to emit light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, thereby giving a consistent appearance to human viewers in the environment. However, the light emitted by the separator element also has a distinguishing property visible to the camera (but preferably not humans), that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units; wherein the image processing module of the detecting equipment is configured to distinguish between the different signals embedded in the illumination from said plurality of lighting units based on the separator elements appearing in said one or more images.

That is, the image processing module of the detecting equipment is configured to detect said distinguishing property in the one or more images in order to distinguish the light emitted by the separator elements from the illumination emitted by the data-transmitting luminous surfaces when the data-transmitting areas of more than one of said plurality of lighting units appear simultaneously in each of the one or more images, and based thereon to segment each of the one or more images into different spatial regions to distinguish between the different signals embedded in the illumination from said plurality of lighting units.
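For the embodiment in which the separator light is simply unmodulated (described below), one way such segmentation could be sketched is to look for stretches of an image profile, taken along the rolling-shutter readout direction, that show no coded-light ripple; everything between two such stretches is then treated as one lighting unit's region. This is only an illustrative sketch with arbitrary window and threshold values, not the detection algorithm of the disclosure.

```python
import numpy as np

def segment_by_separators(profile, win=15, ripple_thresh=1e-3):
    """Toy 1-D segmentation along the readout direction, assuming the
    separators are unmodulated: data-transmitting areas carry a coded-light
    ripple, separator strips do not, so low local ripple energy marks the
    boundaries. Returns (start, stop) index pairs, one per unit region."""
    mean = np.convolve(profile, np.ones(win) / win, mode="same")
    energy = np.convolve((profile - mean) ** 2, np.ones(win) / win, mode="same")
    is_data = energy > ripple_thresh
    regions, start = [], None
    for i, d in enumerate(is_data):
        if d and start is None:
            start = i
        elif not d and start is not None:
            regions.append((start, i))
            start = None
    if start is not None:
        regions.append((start, len(profile)))
    return regions  # each region is then decoded as a separate signal
```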

The separator elements may be a part of the lighting units, integrated within one or both of each adjacent pair of lighting units along some or all of the bounding edge(s), e.g. built into the same housing. Alternatively the separator elements may be separator units separate from the lighting units, placed between them.

In embodiments said distinguishing property may comprise a modulation in the visible light emitted by the separator elements with a different modulation spectrum than the modulation of the illumination from the multiple lighting units. Preferably there is substantially no overlap between the modulation spectrum of the illumination from the lighting units and the light from the separators. E.g. the separators and the transmitting circuitry of the lighting units may be configured so that, by signal energy, no more than the top P% of the signal energy in the modulation spectrum of the illumination from any of the lighting units overlaps with no more than the lower P% of the signal energy in the modulation spectrum of the light emitted by any of the separators, or vice versa; where P may be for example 5, or 1. For instance the modulation of the light emitted by the separators may be a lone sinusoid with a frequency higher than the (100-P)th percentile, by signal energy, of the modulation spectrum of the illumination from each of the lighting units.
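One possible numerical reading of this P% criterion is sketched below: check that at least (100-P)% of the illumination's modulation energy lies below the frequency above which at least (100-P)% of the separator's modulation energy lies. The wording admits other readings, and the FFT-based estimate, the choice of P, and the assumption that both signals are in fact modulated are made here purely for illustration.

```python
import numpy as np

def spectra_separated(illum_signal, sep_signal, fs_hz, p=5.0):
    """Return True if the illumination's modulation spectrum sits essentially
    below the separator's, in the (100-P)% / P% energy-percentile sense
    described above (illustrative interpretation only)."""
    def energy_percentile_freq(x, q):
        spec = np.abs(np.fft.rfft(x - np.mean(x))) ** 2   # modulation spectrum
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
        cum = np.cumsum(spec) / np.sum(spec)              # assumes x is modulated
        return freqs[np.searchsorted(cum, q / 100.0)]
    f_illum_hi = energy_percentile_freq(illum_signal, 100.0 - p)  # (100-P)th percentile
    f_sep_lo = energy_percentile_freq(sep_signal, p)              # Pth percentile
    return f_illum_hi <= f_sep_lo
```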

In alternative embodiments, the distinguishing property may be that the light from the visible separators is not modulated (i.e. is instead constant in time). As another possibility, the distinguishing property may comprise a component in an infrared or ultraviolet part of the electromagnetic spectrum.

In embodiments said distinguishing property may be the same for all of the separator elements. Alternatively the distinguishing property may differ for different ones or subgroups of the separator elements.

For instance, the difference in said property between different ones or subgroups of the separator elements may indicate different directions, and the image processing module may be configured to determine an orientation of the camera with respect to the indicated directions based on the appearance of one or more of the separator elements in the captured images. As another example, the difference in said property between different ones or subgroups of the separator elements may indicate different subregions within the environment, and the image processing module may be configured to determine a location of the camera with respect to the subregions based on the appearance of one or more of the separator elements in the captured images.

Such features can advantageously be used to assist navigation. And/or, in embodiments where the signal from each lighting unit comprises an address of the respective lighting unit and the image processing module is being used to detect the address of one of the lighting units, the differentiation between different subregions provided by the separators may be used to determine which subregion the camera is viewing so as to narrow down an address range and thereby speed up detection of the address.
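As a hypothetical illustration of that address-range narrowing (all names, frequencies and address blocks below are invented): the subregion indicated by a separator's property selects which block of lighting-unit addresses the decoder needs to consider.

```python
# Hypothetical mapping from a detected separator property (here a modulation
# frequency) to the subregion it marks, and from each subregion to the block
# of lighting-unit addresses installed there.
SUBREGION_BY_SEPARATOR = {7000.0: "wing-A", 9000.0: "wing-B"}
ADDRESSES_BY_SUBREGION = {"wing-A": range(0, 128), "wing-B": range(128, 256)}

def candidate_addresses(separator_freq_hz):
    """Narrow the search space before decoding: only addresses installed in
    the indicated subregion need be considered, which can shorten detection
    of a lighting unit's address."""
    region = SUBREGION_BY_SEPARATOR.get(separator_freq_hz)
    return ADDRESSES_BY_SUBREGION.get(region, range(0, 256))
```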

In embodiments the system may comprise a diffuser layer covering the data-transmitting light-emitting areas and the separator elements (i.e. disposed between the light-emitting elements and the environment, such that a person in the environment views the light from the lighting units and separators through the diffuser). This helps give a consistent appearance to human viewers. Preferably the diffuser layer is continuous across all the data-transmitting luminous areas and separator elements. Preferably the diffuser layer is uniform across all the data-transmitting luminous areas and separator elements.

In embodiments, for each of the adjacent pairs of lighting units, the system may comprise two luminous separator elements, one along an edge of each of the respective pair of lighting units, and the light emitted by the two separator elements may be arranged to mix to produce a combined signal. The modulation of the light emitted by the separator elements may not be synchronized so that the combined signal is different than if the two separator elements were emitting individually. In this case the image processing module of the detecting equipment may be configured to distinguish between the different signals based on the combined signal. Alternatively the modulation emitted by the two separator elements may be synchronized.

Either way, in embodiments where a diffuser layer covers the two separator elements, this will aid the mixing.

In a particular implementation, each of the lighting units may be comprised by a respective luminaire comprising a plurality of LED groups (e.g. LED boards), wherein a first one or more of said LED groups may be arranged to emit said illumination and a second one or more of the LED groups may be arranged to act as said separator elements. Further, within each luminaire, current from a same driver circuit may be split between a branch driving the one or more first LED groups and a branch driving the one or more second LED groups, and the transmission circuitry may be arranged to inject the signal into only the branch driving the first LED groups and not the second LED groups.
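A toy numerical model of that split is given below, purely to illustrate the idea: the driver output is divided between the two branches, the coded-light signal is injected only into the data branch, and the separator branch stays constant so that both branches keep the same average level. The equal 50/50 split, the modulation depth, and the assumption that equal average current gives equal perceived brightness are simplifications made here, not details of the disclosed circuit.

```python
import numpy as np

def branch_currents(i_total_amp, signal, depth=0.1):
    """Split the driver current between a data branch (first LED group(s))
    and a separator branch (second LED group(s)); only the data branch
    carries the zero-mean coded-light signal, so both branches have the
    same time-averaged level."""
    i_data = 0.5 * i_total_amp * (1.0 + depth * signal)  # signal in [-1, 1]
    i_sep = 0.5 * i_total_amp * np.ones_like(signal)
    return i_data, i_sep

t = np.linspace(0.0, 0.01, 1000)
i_data, i_sep = branch_currents(0.7, np.sin(2 * np.pi * 2000 * t))
```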

According to another aspect disclosed herein, there is provided lighting equipment comprising: multiple contiguously-mounted lighting units arranged to emit visible illumination to illuminate an environment; transmission circuitry arranged to modulate the visible illumination emitted by each of the multiple lighting units so as to embed a respective signal into the visible illumination emitted by each of the lighting units, wherein each of the multiple lighting units has a data-transmitting luminous area from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair of the multiple lighting units having a boundary therebetween; and for each respective instance of said adjacent pairs of lighting units, a luminous separator element disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data-transmitting areas of the respective pair of lighting units, the separator element being configured to emit visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units.

According to another aspect disclosed herein, there is provided a detecting equipment for detecting different respective signals embedded in visible illumination emitted by multiple contiguously-mounted lighting units to illuminate an environment, wherein each of the multiple lighting units has a data-transmitting luminous area from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair of the contiguous lighting units having a boundary therebetween; the detecting equipment comprising: a camera for capturing one or more images of the data-transmitting luminous areas of a plurality of said multiple lighting units; and an image processing module configured to detect the respective signals based thereon; wherein the image processing module of the detecting equipment is configured to distinguish between the different signals embedded in the illumination from said plurality of lighting units based on an appearance in said one or more images of: for each respective adjacent pair of said plurality of lighting units, a luminous separator element disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data- transmitting areas of the respective pair of lighting units, the separator element emitting visible light with the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units.

According to another aspect disclosed herein, there is provided a method of emitting illumination comprising: using multiple contiguously-mounted lighting units to emit visible illumination to illuminate an environment; modulating the visible illumination emitted by each of the multiple lighting units so as to embed a respective signal into the visible illumination emitted by each of the lighting units, wherein each of the multiple lighting units has a data-transmitting luminous area from which the illumination is emitted, with the data- transmitting luminous areas of each adjacent pair of the multiple lighting units having a boundary therebetween; and for each respective instance of said adjacent pairs of lighting units, using a luminous separator element disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data-transmitting areas of the respective pair of lighting units, to emit visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units.

According to another aspect disclosed herein, there is provided a method of detecting different respective signals embedded in visible illumination emitted by multiple contiguously-mounted lighting units to illuminate an environment, wherein each of the multiple lighting units has a data-transmitting luminous area from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair of the contiguous lighting units having a boundary therebetween; the method comprising: using a camera for capturing one or more images of the data-transmitting luminous areas of a plurality of said multiple lighting units; and using an image processing module configured to detect the respective signals based thereon; wherein the detection comprises distinguishing between the different signals embedded in the illumination from said plurality of lighting units based on an appearance in said one or more images of: for each respective adjacent pair of said plurality of lighting units, a luminous separator element disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data-transmitting areas of the respective pair of lighting units, the separator element emitting visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units.

According to another aspect disclosed herein, there is provided a computer program product for detecting different respective signals embedded in visible illumination emitted by multiple contiguously-mounted lighting units to illuminate an environment, wherein each of the multiple lighting units has a data-transmitting luminous area from which the illumination is emitted, with the data-transmitting luminous areas of each adjacent pair of the contiguous lighting units having a boundary therebetween; the computer program product comprising code embodied on computer-readable storage configured so as when run on one or more processing units to perform operations of: using a camera for capturing one or more images of the data-transmitting luminous areas of a plurality of said multiple lighting units; and using an image processing module configured to detect the respective signals based thereon; wherein the code of said computer program product is configured to distinguish between the different signals embedded in the illumination from said plurality of lighting units based on an appearance in said one or more images of: for each respective adjacent pair of said plurality of lighting units, a luminous separator element (501) disposed along some or all of the boundary between the data-transmitting luminous areas of the respective pair of lighting units, adjoining the data-transmitting areas of the respective pair of lighting units, the separator element emitting visible light with substantially the same color and intensity as the luminous data-transmitting areas of the respective pair of lighting units, but having a distinguishing property visible to the camera that distinguishes the light emitted by the separator element from the illumination emitted by the luminous data-transmitting areas of the multiple lighting units.

BRIEF DESCRIPTION OF THE DRAWINGS

To assist understanding of the present disclosure and to show how embodiments may be put into effect, reference is made by way of example to the accompanying drawings in which:

Fig. 1 is a schematic block diagram of a system comprising a luminaire and a detecting device;

Fig. 2a is an image of an arrangement of luminaires;

Fig. 2b is an image of the arrangement of Fig. 2a captured by a rolling shutter camera, including a rolling pattern due to codes embedded in the illumination emitted by the luminaires;

Fig. 3a is an image of an arrangement of luminaires;

Fig. 3b is an image of another arrangement of luminaires;

Fig. 4a is a sketch of a large area luminous ceiling arrangement;

Fig. 4b is a sketch of another luminous ceiling;

Fig. 5 is an image of an arrangement of luminaires with luminous separator elements between adjacent luminaires;

Fig. 6a is a schematic illustration of a pair of luminaires with luminous separator element therebetween;

Fig. 6b is a schematic illustration of a luminaire with luminous separator elements either side;

Fig. 6c is a schematic illustration of a pair of luminaires with two separator elements therebetween, one along each;

Fig. 7 illustrates a modulation spectrum of light emitted from a separator element and a modulation spectrum of illumination emitted from a luminaire;

Fig. 8a is an image of an arrangement of multiple luminaires with separator elements therebetween, giving the appearance to a human viewer of a continuous, uniform luminous surface;

Fig. 8b is an image captured by a rolling shutter camera of the arrangement of Fig. 8a, including a rolling pattern due to codes embedded in the illumination emitted by the luminaires and the light emitted by the separator elements;

Fig. 9 is another image captured by a rolling shutter camera of an arrangement of luminaires emitting coded light and separated by luminous separator elements;

Fig. 10 provides a schematic plan view and side view of an arrangement of concatenated luminaires with luminous separators;

Fig. 11 is a schematic representation of an image captured by a rolling-shutter camera after segmentation based on the presence of modulation;

Fig. 12 is a schematic block diagram of an implementation for integrating luminous separator elements into each luminaire; and

Fig. 13 is a schematic block diagram of a luminaire having luminous separator elements integrated at each end.

DETAILED DESCRIPTION OF EMBODIMENTS

Given a fixed, limited bandwidth of the receiver (in the present case the camera), if channel separability is imposed as an additional inherent property of the signal, this extra constraint comes at the cost of a lower average transmission bitrate. For a given modulation, there is always a trade-off between bandwidth and bit rate. For example, within a bandwidth of 8 kHz, one could send one 8 kHz data stream of n bit/s, or two frequency-multiplexed 4 kHz data streams of n/2 bit/s, and so on. Alternatively, many coded light transmitters (including embodiments herein) may use a baseband modulation, i.e. not modulating a carrier; one would then have to use time-division or code-division multiplexing techniques to provide separate channels, and these come with similar trade-offs. Thus, implementing different channels means a lower symbol rate. Moreover, in some scenarios the camera may only capture part of the signal with each frame, in which case capture time changes in a very non-linear fashion. This means that if one halves the bit rate, the capture time could be more than doubled.
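The non-linear effect on capture time can be illustrated with a toy Monte-Carlo model (a sketch under made-up assumptions - random frame phase and a fixed per-frame capture window measured in bit-times - not an analysis of any particular camera or protocol): halving the bit rate halves the number of bit-times visible per frame, and the number of frames needed to see every part of a repeating message grows by more than a factor of two.

```python
import random

def frames_to_capture(message_bits, window_bits, trials=2000):
    """Monte-Carlo estimate of how many frames are needed until every bit
    position of a repeating message has fallen inside the camera's per-frame
    capture window at least once, assuming a random phase for each frame."""
    total = 0
    for _ in range(trials):
        seen, frames = set(), 0
        while len(seen) < message_bits:
            start = random.randrange(message_bits)  # phase of this frame
            seen.update((start + k) % message_bits for k in range(window_bits))
            frames += 1
        total += frames
    return total / trials

print(frames_to_capture(message_bits=32, window_bits=20))  # on the order of 3 frames
print(frames_to_capture(message_bits=32, window_bits=10))  # far more than double that
```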

Fortunately, the capability of a camera to discriminate between multiple light sources on the basis of their different spatial locations removes the need for the extra signal constraint. However, reliance on the (then necessary) image-based light segmentation imposes new constraints with regard to the appearance of the lighting system. Often, for aesthetic reasons, the visibility of a border between adjacent luminaires is not desirable as it ruins the illusion of one large single uniform luminaire extending over a long distance in one or two dimensions.

Figures 3a-3b illustrate examples of a lighting system consisting of multiple adjacent differently encoded luminaires employing visible separations 301 between the data-transmitting luminous surfaces 107 of the luminaires 100, here showing up as dark lines between the ceiling tiles (Figure 3a) or dark interruptions in the trunk lights (Figure 3b). The luminous surfaces 107 of the luminaires 100 are separated by clearly visible borders 301, some of which are indicated by the dotted ellipses superimposed on Figures 3a and 3b for illustrative purposes. This visible separation 301 aids the image processing module of the detecting equipment to separate out the different signals from different luminaires, but it also spoils the aesthetics for human users in the illuminated environment. Figure 3a shows a system of luminous ceiling tiles 100, where the illusion of a single uniform surface is ruined by the framing elements 301. Figure 3b shows a system of trunk lights; here, separations 301 between adjacent luminaires 100 are ruining the illusion of long uniform light strips.

Figures 4a and 4b show examples of how it may be desirable for a large-scale uniform lighting system to appear to a human viewer, with no visible join or gap 301 between adjacent luminous surfaces 107 of adjacent luminaires 100. For example such an arrangement may be used to provide a large-size uniform lighting system in the form of a one or two dimensional array of ceiling tiles with continuous, uniform appearance, e.g. such as to provide a so-called luminous ceiling. Figure 5 provides a simulation of how it might be desirable for the arrangement of Figure 3b to appear to a human viewer, with no visible gap or join 301 between adjacent trunk lights.

It would be desirable to provide the illusion of a continuous, uniform luminous surface while still enabling segmentation of the individual composing luminaires 100 to enable decoding of the individual embedded coded light signals.

The present invention is based on the observation that within one single frame captured by the camera, or within two or three frames, the appearance of the encoded light signal may contain insufficient unique characteristics to identify the signal as being from one single specific light source 100. As such, in the presence of a second coded light source, the local sum of the signals is not easily recognized as two distinct differently encoded light contributions. However, with suitable spatial separation then the particular modulation signals can be detected, sometimes even only requiring one or a few camera frames to identify their presence.

In embodiments the present invention facilitates camera based light separation by emitting a unique coded light signal with signal characteristics that: (a) allow instant detection, i.e. requiring one single or a few frames; (b) enable robust detection among other coded light signals, in the presence of other sources of interference (e.g. mains ripple), or under partial occlusion; and (c) preserve detectability of adjacent encoded luminaires.

Figures 3b and 5 show a simulated effect of the invention on the visible appearance of concatenated luminaires 100. Figure 3b shows a system of trunk lights with visible separations 301 between adjacent luminaires. Figure 5 shows the same system of trunk lights without visible separation between adjacent luminaires. Figures 3a and 8a show a similar simulated effect for a two dimensional array of ceiling tiles. Figure 3a shows a system of concatenated luminous ceiling tiles 100 with visible separations 301 therebetween, whilst Figure 8a shows the same system of ceiling tiles without visible separation.

The invention comprises two elements: (a) a luminous separator element 501 as part of a lighting system comprising multiple luminaires 100, the separator element 501 having the same visual intensity as the adjacent luminaires yet having a distinct light output, e.g. a distinct modulation; and (b) a detector capable of detecting one or more luminous separator elements 501 from one or more camera images, preferably implemented as an application running on a programmable mobile camera platform such as a smartphone or tablet.

Some example embodiments are now discussed in relation to Figure 1 and Figures 3a to 10.

Figure 1 shows an example of a luminaire 100 for emitting coded light and detecting equipment 110 for detecting coded light in accordance with embodiments of the present disclosure. The luminaire 100 is mounted on a supporting surface 101, typically the ceiling (though this could instead be another surface such as a wall). The luminaire 100 may be mounted on the supporting surface 101 by being affixed over the supporting surface 101 (as illustrated) or by being embedded in the surface (a portion of the supporting surface 101 being cut away to accommodate the luminaire 100). Either way, the luminaire 100 is mounted so as to emit visible illumination outward from the supporting surface 101 into an environment 109 in order to contribute to illuminate that environment 109 (so as to enable human occupants to see and find their way about within the environment). The environment 109 in question may be an indoor space such as one or more rooms of an office, home or retail space; or may be an outdoor space such as a park or garden; or a partially covered space such as a stadium or gazebo.

The luminaire 100 comprises one or more light-emitting elements 108 disposed in or on a luminaire body 102 in the form of a housing or support (e.g. frame) arranged to house and/or support the light-emitting elements 108. The light-emitting elements 108 may be implemented in one or more lamps (with one or more of the light-emitting elements 108 per lamp), wherein the (or each) lamp may be a removable and replaceable component to be plugged into the luminaire 100. Whatever form they take, the light-emitting elements 108 are arranged to actively emit the above-mentioned illumination into the environment 109, being disposed on an outward-facing luminous surface 107 (luminous area) of the luminaire body 102 (a surface facing the environment 109). The luminous surface 107 may refer herein to the surface formed by the outward-facing surface of the light-emitting elements 108 themselves and the surface of the luminaire body 102 in between them (which is typically substantially reflective, either in the sense of mirror reflection or diffuse reflection). Or optionally, the luminaire 100 may comprise a diffuser 105 disposed over the light-emitting elements 108 (between the luminous elements 108 and the environment 109), in which case the luminous surface 107 (luminous area) may be considered the outward-facing surface of the diffuser 105 (i.e. the surface facing the environment 109) through which the light-emitting elements 108 emit their illumination. Either way, the illumination from the light-emitting elements 108 is modulated to embed a signal, as will be discussed in more detail shortly, such that the luminous surface 107 thus becomes a data-transmitting luminous surface of the luminaire 100.

Each of the light-emitting elements 108 may take any suitable form such as an LED, a set of LEDs, or a filament bulb. The luminaire 100 further comprises a driver 106 coupled to the light-emitting elements 108, and a controller 104 coupled to the driver 106. The driver 106 is arranged to supply power from a power source (not shown) to the light-emitting elements 108 in order to cause them to actively emit the illumination. By "actively" emit herein it is meant that the luminaire 100 has or is connected to a power supply (not shown) which supplies energy in a form other than light (typically electricity), and the driver 106 supplies this energy to the light-emitting elements 108 to convert into the illumination which is sent out into the environment 109. I.e. the emitted illumination is generated by the luminaire 100 (as opposed to passive absorption and re-emission of ambient light).

Furthermore, the controller 104 is arranged to control the driver 106 to vary a property of the illumination emitted by the light-emitting elements 108, typically the intensity, in order to thereby modulate the illumination and thereby embed a signal in accordance with coded light techniques which are themselves already known in the art.

The controller 104 may be implemented in the form of software stored in memory of the luminaire 100 and arranged to run on a processor of the luminaire 100 (the memory in which the controller 104 is stored comprising one or more memory units and the processor on which it is arranged to run comprising one or more processing units).

Alternatively the controller 104 may be implemented in dedicated hardware circuitry, or configurable or reconfigurable hardware circuitry such as a PGA or FPGA, or any combination of software and hardware.

The detecting equipment 110 comprises a camera 112 and an image processing module 114. The camera 112 is able to capture samples of the modulated illumination at different instances in time. The camera 112 may take the form of a rolling-shutter camera which exposes a given frame line-by-line in a temporal sequence, each line at a different moment in time, so as to capture multiple different temporal samples of the modulation in the illumination within a given frame (a given still image). Alternatively the camera 112 may take the form of a global-shutter camera which exposes the entire frame at the same time, in which case each frame samples the modulation in the illumination at a different respective time. Note also that even in the case of a rolling-shutter camera, if the message encoded into the signal lasts longer than one frame, then samples from multiple frames may be required. By whatever means the samples are captured, the camera 112 is arranged to output the samples to the image processing module 114 in order for the signal to be decoded from the captured samples, using techniques which are in themselves already known in the art.
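
By way of illustration only, the following minimal sketch shows why a rolling-shutter camera yields many temporal samples of the modulation per frame whereas a global-shutter camera yields roughly one per frame. The frame rate, sensor height and line period are assumptions chosen for the example and are not taken from this disclosure.

```python
# Illustrative sketch only: the frame rate, sensor height and line period below are
# assumptions for the sake of the example.

FRAME_RATE_HZ = 30.0                 # assumed frame rate
ROWS_PER_FRAME = 1080                # assumed number of sensor rows
LINE_PERIOD_S = 1.0 / (FRAME_RATE_HZ * ROWS_PER_FRAME)  # time between successive row exposures

def rolling_shutter_sample_times(frame_index):
    """Each row of a rolling-shutter frame samples the modulated illumination at a
    slightly later instant, giving many temporal samples per frame."""
    frame_start = frame_index / FRAME_RATE_HZ
    return [frame_start + row * LINE_PERIOD_S for row in range(ROWS_PER_FRAME)]

def global_shutter_sample_time(frame_index):
    """A global-shutter camera exposes the whole frame at once, so each frame yields
    (approximately) one temporal sample of the modulation."""
    return frame_index / FRAME_RATE_HZ

# A message lasting longer than one frame period must be assembled from several frames,
# even with a rolling shutter.
samples = rolling_shutter_sample_times(0)
print(len(samples), "samples in frame 0, spanning", round(samples[-1] - samples[0], 4), "s")
```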

The image processing module 114 may be implemented in the form of software stored in memory of the detecting equipment 110 and arranged to run on a processor of the detecting equipment 110 (the memory in which the image processing module 114 is stored comprising one or more memory units and the processor on which it is arranged to run comprising one or more processing units). Alternatively the image processing module 114 may be implemented in dedicated hardware circuitry, or configurable or reconfigurable hardware circuitry such as a PGA or FPGA, or any combination of software and hardware.

The detecting equipment 110 may take the form of a mobile user terminal such as a tablet, smartphone or smartwatch, and the camera 112 may be an integrated camera of the mobile user terminal with the image processing module 114 also being implemented on the same mobile user terminal (e.g. as a suitable light detection "app"). For example the user terminal may be a smartphone or tablet and the camera 112 may be the front-facing camera of the smartphone or tablet. Alternatively the camera 112 may be implemented in a separate physical unit from the image processing module. E.g. the camera 112 may be implemented on a dedicated camera unit or camera peripheral or on a smartphone, tablet or smartwatch, while the image processing module may be implemented on a separate computer unit such as a server, desktop computer or laptop computer, connected to the unit housing the camera 112 via any suitable wired or wireless connection, e.g. a wired connection such as a USB connection, or a wireless connection such as a Wi-Fi or Bluetooth connection, or via a wired or wireless network such as a wireless local area network (e.g. Wi-Fi network) and/or a wide area network or internetwork such as the Internet.

Only one luminaire 100 is shown in Figure 1, but in fact an array of multiple such luminaires 100 (preferably three or more) is concatenated together to form a lighting system, e.g. as shown in Figure 2a. Each of the multiple luminaires 100 is mounted on the supporting surface 101 in a similar manner as described above. Alternatively the luminaires 100 do not have to be mounted on a supporting surface 101, but instead may be mounted to another structure (e.g. a gantry) so as themselves to define a surface or line. Either way, the surface or line may be constrained to a plane or straight line respectively, i.e. which is flat, or may follow a contour which is curved in one or two dimensions. Typically this plane or contour is that of the ceiling or possibly a wall, and typically the ceiling or wall is flat (a plane), but it is also not excluded that the multiple luminaires 100 could be arranged to conform to a surface contour, e.g. a curved ceiling or wall, or a curved gantry structure. Note also that the array of luminaires 100 may be one- or two-dimensional, such that in some embodiments the array extends in both dimensions of the plane or a two-dimensional contour, whereas in some other embodiments the luminaires 100 may be arranged along a straight line or a curved line.

Either way, the luminaires 100 are concatenated together in the 1D or 2D array such that, within said plane or contour, or along said line, each of the multiple luminaires 100 is adjacent to at least one other of the multiple luminaires 100, sharing a boundary between them. For example the luminaires 100 may be arranged in a 2D rectangular array as shown in Figure 2, or a 1D linear array as shown in Figure 3b.

Further, each of the luminaires 100 is arranged to emit a different respective coded light signal embedded in its respective illumination, for example a unique ID code identifying the respective luminaire within the system, and/or other information specific to the luminaire in question such as a respective location, time stamp and/or status report (e.g. reporting burning hours and/or operating temperature, etc.). Note therefore that the signal is not limited to comprising only an ID code. The signals may originate from the respective controllers 104 of the different respective luminaires 100, being generated locally by each in a distributed fashion, or alternatively each controller 104 may generate its signal under control by a central controller (not shown) such as a server. In the latter case the central controller may be connected to the local controller 104 of each of the luminaires 100 by any suitable wired or wireless connection, e.g. via a wireless local area network (WLAN) such as a Wi-Fi, Bluetooth or ZigBee network, or via a wide area network or internetwork such as the Internet.

Also, referring again to Figures 3a and 3b, there is a separation 301 between each adjacent pair of the multiple luminaires 100. The image processing module 114 is configured with an image recognition algorithm which is able to use the appearance of the separation 301 between adjacent luminaires in the array to distinguish between the signals from the different luminaires 100, again using techniques which are in themselves known in the art. Thus even if the different coded light signals are embedded at the same modulation frequency and have no other form of channel separation (e.g. do not use time division or code division multiple access), then the image processing module 114 is able to separate out the individual signals from different individual ones of the multiple luminaires 100.

Note that the separation 301 allows the light from the different luminaires, and therefore the signals, to be distinguished from one another even when more than one of the luminaires 100 appear simultaneously in the same image or simultaneously in each of a sequence of images capturing the same view of the environment, i.e. even when appearing in the same frame or together in each of the same sequence of frames (that is, falling at the same time in the same frame area or image capture area of the camera's image sensor, from which the images are captured). In the case of a rolling-shutter camera, this may even allow detection of the signals from more than one luminaire appearing simultaneously in the same frame (same single still image).

As discussed previously, it would be desirable if the separation 301 between adjacent luminaires 100 was not visible to human occupants of the environment 109, but was still visible to the image processing module 114 via the images captured by the camera 112 in order to distinguish between the different coded light signals.

Figure 6a gives a schematic side view of an arrangement to address this desire.

The arrangement comprises a pair of adjacent, concatenated luminaires 100a, 100b, having adjacent edges with a boundary region 301 comprising a gap between the two luminaires 100a, 100b. But to hide this boundary region to human viewers in the environment 109, a luminous separator element 501 is disposed in the gap, running along the full length of the luminaires' adjacent edges (in the direction perpendicular to the page, i.e. in the plane or contour of the luminaires 100a, 100b). The luminous separator element 501 comprises one or more light-emitting elements 108', arranged to emit light into the environment 109 with the same intensity (power per unit area) at least in the visible spectrum, and the same color (same spectral composition at least in the visible part of the EM spectrum), as the luminous surfaces 107 of the pair of adjacent luminaires 100a, 100b which it separates. Furthermore, the luminous separator element adjoins with the adjacent edges of the pair of luminaires 100a, 100b, i.e. substantially abuts with each of those edges such that to an average human viewer in the environment 109 the luminous areas 107 of the pair of luminaires 100a, 100b and the luminous separator element 501 together give the appearance of a single, unbroken luminous surface.

However, the luminous separator element 501 is also arranged such that the light it emits comprises a property that is visible only to the camera 112, and is not visible or at least not noticeable to a human viewer in the environment 109 with average visual acuity and attentiveness. The image processing module 114 is configured to separate out the different coded light signals from the adjacent luminaires 100a, 100b based on the appearance of the distinct property of the separator element 501 in one or more images of the luminaires 100a, 100b captured by the camera 112. But to a person, the aesthetic of a continuous, luminous surface is unbroken.

The visibility or noticeability to an average person for the present purposes may be defined in any of a number of ways. E.g. the separation 301 between adjacent luminaires 100a, 100b may be arranged so as to be invisible to a defined percentile from a representative random sample of people (e.g. no more than 1% of the most perceptive people, or no more than 5%). The sample may or may not be restricted to a certain category of people, e.g. intended users of the environment 109, and/or those who are not classified as partially sighted in the jurisdiction in which the environment 109 is located. Alternatively or additionally, the coded light modulation may be designed to meet a different criterion for being invisible. For example, it may be designed such that no more than 1% or no more than 5% of the signal energy falls below a certain threshold (e.g. 60Hz or 100Hz) in the modulation power spectrum. Various criteria for designing a coded light to be invisible to human visual perception will in themselves be known to a person skilled in the art.

Furthermore, in terms of the overall intensity of the light emitted by the luminous separator element 501, this is substantially the same as the intensity emitted by the neighboring luminaires 100a, 100b, in that they are similar enough that any difference in intensity is not perceivable to a user. The perception here may be quantified in terms of the concept of "just noticeable difference" (JND), also known as "Weber's Law". As will be familiar to a person skilled in the art, Weber's Law states that in the presence of ambient illumination, a temporally stable intensity variation is just noticeable when the contrast is about:

ΔI/I ≈ 1/100

This ratio of 1/100 is remarkably stable for a large range of illumination levels. In the context of the present disclosure, I is the output of the luminaire 100 and ΔI is the amount by which the light output of the luminous boundary 501 is allowed to differ (i.e. ΔI/I < 1/100). With regard to the color spectrum, a similar condition may be applied to the color channels in any suitable color space, e.g. in YUV space the chrominance channels U and V may be arranged to satisfy ΔU/U < 1/100 and ΔV/V < 1/100; or in RGB space, ΔR/R < 1/100, ΔG/G < 1/100 and ΔB/B < 1/100. Weber's Law is known by many in the field of human factors and ergonomics.
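
Purely as an illustration of how the above condition might be checked, the following sketch applies the ΔI/I < 1/100 criterion per color channel. The channel values are arbitrary example numbers and the function name is an assumption for the example, not part of this disclosure.

```python
# Illustrative check of the "just noticeable difference" criterion ΔI/I < 1/100,
# applied per channel (here RGB); the example values are assumptions.

JND_RATIO = 1.0 / 100.0

def within_jnd(luminaire_rgb, separator_rgb, ratio=JND_RATIO):
    """Return True if the separator differs from the luminaire by less than the
    just-noticeable difference in every channel."""
    for i_lum, i_sep in zip(luminaire_rgb, separator_rgb):
        if i_lum <= 0:
            return False  # avoid division by zero; treat as not comparable
        if abs(i_sep - i_lum) / i_lum >= ratio:
            return False
    return True

# Example: a separator within 0.5% of the neighbouring luminaires' output passes,
# while a 2.5% difference in the red channel fails.
print(within_jnd((200.0, 198.0, 195.0), (201.0, 198.5, 194.2)))  # True
print(within_jnd((200.0, 198.0, 195.0), (205.0, 198.0, 195.0)))  # False
```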

Alternatively or additionally, in other embodiments the luminous separator element 501 may be designed to satisfy another criterion for invisibility relative to the luminaires 100a, 100b. For instance, apart from Weber's Law there also exist other criteria that relate to the apparent size of the brightness variation. These are based on the average human contrast sensitivity related to the dimensions of the contrast variation (e.g. a contrast threshold expressed as a function of minutes of arc). See for example, "Outdoor Lighting: Physics, Vision and Perception", Duco Schreuder, Springer Science 2008, ISBN: 978-1-4020-8601-4. In general, a skilled person will be aware of various criteria for defining invisibility to human perception.

Also, in embodiments, the coded light and the difference between the luminous separator 501 and luminaires 100 need only be invisible from the perspective of a user standing on the floor whilst the luminaires 100 and luminous separator 501 are mounted at ceiling height in the environment 109 in question (e.g. 5m or more from floor height in a large retail environment).

Note that Figure 6a shows only a pair of adjacent luminaires 100a, 100b and the respective separator element 501 in between, but more generally the system may comprise an array of two or more such luminaires 100, and preferably three or more, with each being adjacent to at least one other in the array and with a luminous separator element 501 disposed in between each adjacent pair in a similar manner as described above.

Optionally, to aid the continuity of the appearance, a diffuser layer 105' may be disposed continuously and uniformly over all the luminaires 100a, 100b, 100 and the separator elements 501 (rather than an individual diffuser 105 per luminaire 100). The diffuser layer 105' is disposed between the light emitting elements 108, 108' and the environment 109, over the outward surfaces of the luminaires 100 and separator elements 501, such that both the illumination from the luminous surfaces 107 and the light from the separator elements 501 are viewed through the diffuser layer 105' when viewed from the environment 109.

Note also that the separator elements 501 may be integrated with a respective one of the luminaires 100, e.g. incorporated into its housing. For instance see the example of Figure 10. Alternatively each of the separator elements 501 may be a distinct unit in its own right, not integrated with either of the luminaires 100 it separates but simply being placed between them. Either way the light-emitting elements 108' of the separator elements 501 may be arranged to have their own driver and/or controller (not shown); or may be arranged to be driven by the same driver 106 of one of the respective pair of adjacent luminaires 100a, 100b, and/or controlled to emit their light (including being controlled to include said distinguishing property in the light) by the same controller 104 which controls one of the respective adjacent luminaires 100. In the case of a separate controller, this may again be implemented in the form of software stored in memory of the separator element 501 and arranged to run on a processor of the separator element 501; or may be implemented in dedicated hardware circuitry, or configurable or reconfigurable hardware circuitry such as a PGA or FPGA; or any combination of software and hardware. Note further that the controller of the separator element 501 may or may not be controlled from another, remote controller (e.g. from a server or user terminal). Any of the options discussed above in relation to Figure 1 for implementing a controller 104 to embed a signal into the illumination emitted by a luminaire 100 may also apply to implementing a controller to include a separating property in the light emitted by the separator element 501.

There are a number of options for implementing the invisible separating property of the light emitted from the separator elements 501.

In a preferred embodiment, the luminous separator elements 501 are all configured to emit their light with a modulation that is different from that of the illumination emitted by any of the luminaires 100 in the system, and with the light modulation characteristics of the emitted light being the same for all the luminous separator elements 501. This light modulation is a signal with a modulation frequency spectrum that does not coincide with the modulation frequency spectrum that is characteristic of the luminaire-specific modulation carrying the luminaire-specific data (i.e. the modulation spectrum of the modulation in the illumination emitted by the data-transmitting luminous surfaces 107). For example, the separator signal may be a narrowband signal that could even be a sine wave with a fixed frequency, characteristic of all separators 501 in the lighting system. This is illustrated in Figure 7, where the dotted curve represents the average power spectrum of the modulation in the illumination emitted by the data-transmitting luminous surface 107 of one (or each) of the luminaires 100, and the solid curve represents the average power spectrum of the modulation in the light emitted by one (or each) of the separator elements 501. As shown in Figure 7, the separator-specific light modulation lies outside the average luminaire-specific light modulation spectrum.

Thus the modulation spectrum of the illumination from the luminous surfaces 107 of the luminaires 100 is arranged to have a substantially different center or peak frequency than that of the light emitted by the separator elements 501 (the peak frequency being the frequency at which the maximum signal energy occurs). These spectra are also arranged so that they do not substantially overlap with one another. "Substantially" means in the sense that the difference is sufficient for the image processing module 114, based on the image(s) captured by the camera 112, to distinguish the light from the separators 501 from that of the luminaires' luminous surfaces 107. For example, the system may be arranged such that no more than P% of the signal energy in the upper part of the modulation spectrum of the illumination emitted by the luminous surfaces 107 of the luminaires 100 overlaps with no more than P% of the signal energy in the lower part of the modulation spectrum of the light emitted by the separator elements 501 (or vice versa), where P may for example be 1, 5, 10 or 20. And/or, the system may be arranged such that the center or peak frequency of the modulation in the illumination emitted by the luminous surfaces 107 of the luminaires 100 is at least X times that of the modulation in the light emitted by the separator elements 501 (or vice versa), where X may for example be 1.5, 2 or 3.
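
The following sketch is given only by way of example: it checks the two criteria above for a pair of assumed modulation waveforms (a data-like modulation concentrated around 200 Hz and a narrowband separator tone at 1 kHz). The waveforms, sampling rate, thresholds and function names are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

# Illustrative check of the spectral separation criteria described above. The waveforms,
# sampling rate and thresholds are assumptions chosen for the example.

def modulation_spectrum(signal, fs):
    """One-sided power spectrum of a modulation waveform sampled at fs Hz."""
    power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, power

def sufficiently_separated(sig_data, sig_sep, fs, x_min=2.0, p_max=0.05):
    """True if the peak frequencies differ by at least a factor x_min and at most a
    fraction p_max of either signal's energy falls on the other side of the midpoint."""
    f, p_data = modulation_spectrum(sig_data, fs)
    _, p_sep = modulation_spectrum(sig_sep, fs)
    peak_data, peak_sep = f[np.argmax(p_data)], f[np.argmax(p_sep)]
    ratio_ok = max(peak_data, peak_sep) >= x_min * min(peak_data, peak_sep)
    mid = 0.5 * (peak_data + peak_sep)
    low, high = (p_data, p_sep) if peak_data < peak_sep else (p_sep, p_data)
    leak_low = low[f > mid].sum() / low.sum()      # lower signal's energy above the midpoint
    leak_high = high[f <= mid].sum() / high.sum()  # upper signal's energy below the midpoint
    return ratio_ok and leak_low <= p_max and leak_high <= p_max

fs = 8000.0
t = np.arange(0, 0.5, 1.0 / fs)
data_modulation = np.sin(2 * np.pi * 200.0 * t)   # assumed data modulation centred on 200 Hz
separator_tone = np.sin(2 * np.pi * 1000.0 * t)   # assumed narrowband separator tone at 1 kHz
print(sufficiently_separated(data_modulation, separator_tone, fs))   # True for these waveforms
```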

Figures 8a and 8b show a simulated example of a luminous ceiling element with a uniform appearance to the human eye. Figure 8b shows a simulated example of the same element as 'seen' by the detecting camera 112, revealing to the image processing module 114 the different coded light signals for each of the composing luminaires 100 and revealing the light separators 501 as an easy-to-detect high-frequency spatial pattern. As shown in Figure 8b, note how a higher modulation frequency causes the associated spatial period in the captured camera image, or in each of the images, to shorten and to spread over fewer lines, allowing extraction from one frame or only a few additional frames. Figure 8a shows the corresponding appearance to a user in the environment 109. Figures 8a and 8b thus show an example of an embodiment of the invention based on a separator-specific light modulation that is outside any of the luminaire-specific light modulation spectra. As a variant of this, the temporal period of the separators' modulation may be made much shorter than the exposure time of the camera 112, such that the light separators appear to the camera 112 without any modulation, as if the separators were not modulated but driven with a constant current. This is illustrated in Figure 9. As shown, the result of using a separator-specific light modulation signal with a temporal period that is much shorter than the exposure time of the camera 112 is that the fast modulation averages out, thus appearing spatially constant in the image (or each of the images) captured by the camera 112. In this case, the image processing module 114 can identify the separators 501 on the basis of this (apparently) unmodulated appearance, as contrasted against the modulated appearance in the image(s) of the illumination from the luminaires 100.
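
As a purely numerical illustration of this averaging effect, the sketch below compares the row-to-row variation produced by a slow, data-like modulation with that produced by a separator modulated much faster than the row exposure time. The exposure time, line period and modulation frequencies are assumptions for the example and are not specified in this disclosure.

```python
import numpy as np

# Illustrative only: assumed exposure time, line period and modulation frequencies.

def row_exposure_mean(mod_freq_hz, exposure_s, row_start_s, samples=10000):
    """Mean of a sinusoidal intensity modulation (50% depth) integrated over one
    row's exposure window starting at row_start_s."""
    t = row_start_s + np.linspace(0.0, exposure_s, samples)
    return float(np.mean(1.0 + 0.5 * np.sin(2.0 * np.pi * mod_freq_hz * t)))

exposure = 1e-3                        # assumed 1 ms row exposure
row_starts = np.arange(20) * 30e-6     # assumed 30 microsecond line period

slow = [row_exposure_mean(500.0, exposure, t0) for t0 in row_starts]     # data-like modulation
fast = [row_exposure_mean(50000.0, exposure, t0) for t0 in row_starts]   # fast separator modulation

print("slow modulation, row-to-row spread:", round(max(slow) - min(slow), 4))  # visible banding
print("fast modulation, row-to-row spread:", round(max(fast) - min(fast), 6))  # averages out to ~0
```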

In another variation of this embodiment the separator signal may indeed be made constant as a function of time. In this case the separator elements 501 would also appear as in Figure 9, and thus the image processing module 114 can identify the separation in the same way.

In yet another embodiment, the separator signal may comprise a component in a non-visible part of the electromagnetic spectrum, i.e. in the infrared and/or ultraviolet range. If the camera 112 is able to detect such EM wavelengths, then this provides another way in which the image processing module 114 can see the separators 501 in the captured images while the human viewer cannot see the separators 501 with the naked eye. There are cameras that support IR as well as visible light detection. While these are currently used mainly for security (and, perhaps, vehicular) applications, it is not excluded that a user device such as a smartphone could also be equipped with such a camera, or the techniques disclosed herein could be applied in other use cases, e.g. where the camera is mounted on a robot. UV may provide an advantage over IR in some situations. For example, where luminaires are operated beneath skylights, it could be that the skylight glass blocks external UV to such an extent that operating in UV provides more reliability than visible or IR modes. White LEDs that use UV to excite phosphors might provide the UV without extra LEDs. Robotics or other devices that can support custom camera modules optimized for UV operation may exploit UV even while human users make do with visible light cameras on their smartphones.

Embodiments using IR or UV components may be particularly (but not exclusively) advantageous for machine vision applications, e.g. robots or self-guided shopping trolleys. Using the IR channel for unambiguous boundary detection could be very fast. In any of the above embodiments, a separator signal can also be composed of more than one characteristic component simultaneously, e.g. more than one characteristic modulation frequency component, or a characteristic frequency component in the non-visible range, or a characteristic modulation frequency component in the visible range combined with a component in the non-visible range.

Note that the separator signals can be the same for all the separator elements 501 or can be different from one another. In the case where they are different from one another, each has the characteristic of being distinguishable from the illumination from the data-transmitting surfaces 107 of all of the luminaires 100, e.g. based on any of the distinguishing properties discussed above. E.g. each of the separators 501, or each of different subgroups of the separators 501, may be arranged to emit its light with a different modulation frequency or modulation frequency spectrum relative to the other separators 501 or subgroups of separators; in which case, the modulation frequency or spectrum for each distinguishes it from that of the illumination from the data-transmitting surfaces 107 of all of the luminaires 100 (whether the luminaires 100 all emit with the same modulation spectra as one another or different modulation spectra from one another).

In the case where the separator signal is different amongst different ones or different subgroups of the separator elements 501, in some embodiments this may advantageously be exploited to provide additional information about the environment 109.

For instance, in embodiments, different separator signals (e.g. different modulation frequencies) are chosen for different subgroups of the separators 501 in order to indicate cardinal directions (north-south versus east-west) and thereby enable the image processing module 114 to estimate the orientation of the device 110 without the need for a magnetic compass, based instead on the appearance of the different separator elements 501 in the captured image(s). Alternatively or additionally, different separator signals (e.g. different modulation frequencies) may be chosen for different subgroups of the separators 501 in order to disambiguate among different sections of the lighting system in different parts of the environment 109, e.g. different parts of the building. For instance, an application of this can be found in the case where the signal embedded in the illumination from each luminaire 100 comprises a respective ID of the respective luminaire 100 and the image processing module 114 is attempting to decode the ID of one or more of the luminaires 100 from the captured image(s). In this case, in embodiments the different separator signals (e.g. different modulation frequencies) for different subgroups of the separators 501 can be used by the image processing module 114 so as, when luminaire-specific identifiers are re-used, to minimize the address range of the coded light IDs and so speed up detection. Consider for example a retail or factory shed containing 10,000 luminaires. One could use a 14-bit ID to uniquely identify each luminaire. This minimizes the probability of address clashes during luminaire and driver installation and replacement, but it also gives cumbersomely long addresses and requires larger signals which take longer to detect. An alternative therefore is to allow addresses to be re-used, with disambiguation provided by other, coarser navigation engines. In embodiments, the separator elements 501 can be used to provide such disambiguation, thus improving detection time per luminaire.
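
The following back-of-the-envelope sketch merely illustrates the address-range arithmetic for the 10,000-luminaire example above. The zone count, the mapping from separator frequency to zone, and the function names are assumptions chosen for the illustration, not features of any particular installation.

```python
import math

# Illustrative arithmetic only; the zone split and separator frequencies are assumptions.

TOTAL_LUMINAIRES = 10_000
print("globally unique ID length:", math.ceil(math.log2(TOTAL_LUMINAIRES)), "bits")   # 14 bits

# Assumed mapping of separator modulation frequency to a section ("zone") of the building.
ZONE_BY_SEPARATOR_FREQ_HZ = {1000.0: "zone A", 1300.0: "zone B",
                             1600.0: "zone C", 1900.0: "zone D"}
per_zone = TOTAL_LUMINAIRES // len(ZONE_BY_SEPARATOR_FREQ_HZ)
print("zone-local ID length:", math.ceil(math.log2(per_zone)), "bits")                # 12 bits

def resolve_zone(detected_freq_hz, tolerance_hz=50.0):
    """Map the separator modulation frequency seen by the camera to a zone label,
    so that a shorter, zone-local luminaire ID can be decoded unambiguously."""
    for freq, zone in ZONE_BY_SEPARATOR_FREQ_HZ.items():
        if abs(detected_freq_hz - freq) <= tolerance_hz:
            return zone
    return None

print(resolve_zone(1610.0))   # "zone C"
```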

In another embodiment, each composing luminaire 100 of the lighting system has a differently modulated group of separator LEDs (or light-emitting elements) 108, 108' integrated within the composing luminaire 100 itself, such that by concatenation these luminaires 100 (e.g. luminous tiles such as ceiling tiles) make up a lighting system according to embodiments of the invention. Examples of such elements and their concatenation are illustrated in Figure 10. Here the intentional asymmetric arrangement of the differently-modulated separator LEDs requires the composing elements to be concatenated with the same orientation. The top-left diagram in Figure 10 depicts a bottom-up view of a composing luminaire 100 comprising separately driven luminous segments: a segment 107 transmitting coded light messages, and another segment 501 transmitting a separator signal. The lower-left diagram in Figure 10 depicts, in a bottom-up view and a side view, the effect when multiple composing luminaires 100 are used in concatenation. The top-right and bottom-right of Figure 10 show a corresponding example of a trunk light. Note that, to the human eye, the transitions between luminaires 100, as well as between segments 107, 501 within a luminaire 100, are invisible.

In another embodiment of the invention, the above-mentioned differently encoded groups of LEDs (or light-emitting elements) 108, 108' are arranged as depicted in Figures 6b and 6c, where the light from the neighboring separator LED groups 108' is allowed to mix to compose a second separator signal, which is essentially the sum of the two signals. This embodiment is particularly relevant in case the composing luminaires 100 are placed behind a common light-diffusing surface 105', such as for a luminous ceiling as shown by way of example in Figure 4b. In case the separator signals of different segments 107, 501 are not synchronized, there is a chance that the separator signals of two neighboring separator segments 501a, 501b combine with a different and unknown phase. This may cause the separator regions locally to adopt a different appearance depending on the local phase difference. In the case of opposite phase the net coded-light signal locally adds up to a DC signal; otherwise, in the case of the same phase, the signals combine to the characteristic modulation signal. In case such variation in the appearance of the light separations is undesired, adjacent separator elements 501a, 501b can be synchronized. Such synchronization 601 can be established on the basis of a central reference for all composing luminaires 100 in a lighting system. Otherwise such synchronization could be established locally, between adjacent composing separators 501a, 501b, either by an electrical connection or by locally measuring the light modulation with a light-sensitive element. Note that such synchronization may be confined only to the separator signals and need not affect the coded light signals transmitting regular messages or luminaire IDs.

In the case of no synchronization, there may be one or two reasons for using this option. One, the combination of two separation areas into one larger separation area makes it easier to detect; and two, the combined signal is clearly an 'inter-luminaire' separator rather than an 'end-of-line' separator. It may also be that the combined separator zone radiates an intentional mix of the two modulated signals, rather than two separator signals, as the combined separator signal. This would ease driver requirements and has the additional advantage that the separation zone only appears if luminaires are directly adjacent to each other. If they are not, the modulated signal is available right up to the edge of the luminaire. In this case, it could be advantageous if the adjacent luminaires are unsynchronized, because the combined separator signal will be an incoherent mixture that the camera might readily recognize as 'not modulation'.

It will be appreciated that the above embodiments have been described by way of example only.

An example application of the present invention is in ceiling-mounted professional lighting systems, e.g. a planar luminous ceiling solution, or trunk illumination systems for retail environments, where an aesthetically uniform appearance of the luminaires 100 can be as desirable as the quality of the services, such as indoor localization, that are enabled by the coded light. As another example application, the invention can be used with luminous textiles as well as with luminous carpets.

For instance, the above has been described in terms of luminaires, but these are not limited to any particular traditional form of luminaire. A luminaire herein is meant to refer to any lighting module comprising at least one fixed or replaceable luminous element and some associated fitting, socket, support and/or housing; with the different luminaires being discrete units or modules, which are capable of being used individually, but which can also be concatenated together into an array (in embodiments to form an apparently continuous surface or trunk). E.g. in the case of a luminaire in the form of a modular luminous ceiling tile, wall tile or floor tile, the lamp(s) may comprise one or more LEDs and the support comprises a substrate and any connections for mounting the tiles. In other embodiments, as mentioned, the luminaires may be modular sections of a modular trunk lighting system. In embodiments, a given luminaire 100 contains a single driver and a single modulator and therefore emits the same code over a single luminous surface, whilst adjacent luminaires 100 emit different codes and are unsynchronized.

Furthermore, it is also possible that the techniques disclosed herein are used not to distinguish between the light from different luminaires of an array of concatenated luminaires, but rather to distinguish between the light from different individual segments within a lamp (a lamp being an indivisible lighting unit to be plugged into or otherwise connected into the supporting structure of a luminaire, wherein the individual segments cannot be separated). In this case the lamp comprises multiple light-emitting elements 108 (e.g. LEDs) divided into two or more subsets each comprising one or more light-emitting elements 108 (e.g. different groups of LEDs within the lamp).

One motivation for arranging different segments of a lamp to emit different coded light signals may simply be to increase the amount of data emitted from a given lamp. Another application, however, is to enable the image processing module 114 to determine the orientation of the camera 112 relative to the lamp from an image of that lamp, even when only a single lamp is present in the captured image. E.g. if the image processing module 114 is also configured to determine the shape and dimensions of the lamp, it can use an image recognition algorithm to determine the distance and angle from which the camera 112 is viewing the lamp. However, the lamp may have a degree of rotational and/or mirror symmetry (e.g. being rectangular or circular), meaning the image recognition algorithm alone cannot disambiguate between two or more possible solutions (e.g. if the lamp is oblong, oval or linear, it will look the same from two different directions; or if it is square it will look the same from four different directions). By emitting differently coded light from different sections of the lamp in order to break the symmetry of the shape of the lamp, the image processing module 114 can detect these different sections in the captured image(s) and thus disambiguate between the different possible views of the lamp. Thus the image processing module 114 is able to determine a distance and angle from which the camera 112 is viewing the lamp, and thus determine a position of the camera 112 relative to the lamp. E.g. given knowledge of the position of the lamp on a map or floor plan, the image processing module 114 can determine the position of the camera on the map or floor plan, e.g. for the purpose of indoor navigation or providing location-based services. The idea of using coded light segments to break the symmetry of a light source is disclosed in WO2015/000772. By adding the spatial modulation pattern of the present invention, this advantageously allows the segments to be more readily detected and differentiated by the image processing module 114 whilst still allowing the different segments to remain substantially continuous in appearance to a human viewer (not requiring a substantial non-emitting gap in between).
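
By way of a toy example only, the sketch below shows how the left-to-right order of the decoded segment codes resolves the two-fold ambiguity of an oblong lamp. The segment codes, layout and function names are assumptions for the illustration, not taken from this disclosure or from WO2015/000772.

```python
# Toy illustration: an oblong lamp looks the same from two opposite directions, but the
# order of the differently coded segments in the image resolves the ambiguity.
# The segment codes and layout below are assumptions for the example.

KNOWN_LAYOUT = ("CODE_A", "CODE_B")   # assumed physical order of segment codes along the lamp

def resolve_view(detected_codes_left_to_right):
    """Return the apparent rotation of the lamp (in degrees) that shape recognition
    alone could not distinguish, based on the decoded segment order."""
    observed = tuple(detected_codes_left_to_right)
    if observed == KNOWN_LAYOUT:
        return 0
    if observed == KNOWN_LAYOUT[::-1]:
        return 180
    return None   # codes not recognised, or lamp only partially visible

print(resolve_view(["CODE_A", "CODE_B"]))   # 0
print(resolve_view(["CODE_B", "CODE_A"]))   # 180
```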

Generally the techniques disclosed herein can be applied to distinguishing between the light from an array of any type of lighting unit, where "lighting unit" may refer to any luminaire, lamp or other luminous device designed for illuminating an environment.

In further variants of the present invention, while in preferred embodiments the multiple lighting units are regularly spaced in the plane or contour in which they are mounted, this is not necessarily the case in all possible embodiments.

Furthermore, although preferred, it is not necessary in all possible embodiments to have the luminous separator element 501 extending along the entirety of the boundary between luminaires or lighting units 100 - the minimum requirement is only to break up the otherwise continuous appearance of the luminaires or lighting units 100 to a distinguishable degree, not necessarily to separate them completely. If the separation is sufficiently large, this may be sufficient to enable detection by the decoder 114 from an image or images of that separation captured by the camera 112. In embodiments, e.g. for a luminous ceiling composed of square luminous ceiling tiles, the separator elements 501 may even take the form of dots in two opposing corners of each square. This allows the decoder 114 to distinguish between the different tiles 100 by recognizing the separator dots as corners of the tiles 100 in the captured image(s) (analogous to a "connect the dots" puzzle).

Further considerations as to example implementations are now discussed in relation to Figures 11 and 12.

The parts 107 of the luminaire 100 that emit the same coded light identifier are typically connected to the same driver. Embodiments above have described the use of another type of modulation at the boundaries of the luminaire 100, which can be detected to separate the coded light luminous surfaces from each other. However, this requires a second modulation source in the driver. It also introduces possible crosstalk between the modulation in the adjacent separators and between the separator and the main coded light part. Another issue is that finding the separator modulation areas requires an extra detector processing step. Other possible solutions can be implemented by altering a specific modulation property per luminaire, for example modulation depth or transmission frequency. This, however, requires a commissioning step after installation, to create the desired arrangement, which may not be desirable and is only possible for connected lighting. The commissioning step could be prevented by using preconfigured luminaires and mounting them in a predefined structured way, but also this is not very practical.

Another solution can be to use modulated and non-modulated luminaires in an interleaved way. From an indoor positioning perspective the solution of using non-modulated luminous elements has the disadvantage that large parts of the lighting system do not contribute to the accuracy, because they cannot be identified. Also, the commissioning required for this again adds complexity.

Embodiments herein therefore provide an alternative implementation which can enable separation without adding much complexity to the transmitter, without commissioning, without dedicated receiver algorithms for this specific purpose, and preserving the indoor positioning capabilities of the lighting system.

This is achieved by creating areas within the luminaire 100 without coded light modulation, while keeping the brightness level of the luminous surface the same. Using a detector 114 that can perform segmentation based on the presence of coded light modulation, the luminous areas 107 can be separated from each other. The separator areas 501 are detected for the purpose of segmentation, but no signal need be decoded from them per se.

Often a luminaire is built with a number of LED boards. By connecting the LED boards at the edge of the luminaire 100 to a non-modulated current source, and the boards in the middle part of the luminaire 100 to the modulated current source, it is possible to obtain an arrangement as has been described above (e.g. see Figures 5, 8a, 9 and 10) in which, when the luminaires 100 are placed next to each other, they form a continuous light line or surface. As has been discussed, in embodiments the separators 501 may be implemented as areas without modulation.

After segmentation based on the presence of modulated and non-modulated areas 107, 501, done in the camera detector 114, it is possible to identify the different areas in an image. The gaps formed by the non-modulated areas 501 simplify the separation of the data-transmitting luminous surfaces 107 of the different luminaires 100. Figure 5 shows a camera image of a continuous light line, and Figure 11 shows how such an image might appear after segmentation based on the presence of modulation. The same concept can be used for large luminous surfaces constructed from tiles with different drivers and coded light information. See again Figures 8a and 9. In this case the non-modulated areas are placed at the horizontal and vertical sides of every tile.
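
The following sketch is offered only as an illustration of such a segmentation: it thresholds the per-column modulation energy of a synthetic rolling-shutter image of a continuous light line, labelling low-energy columns as non-modulated separator areas 501 and the modulated runs in between as data-transmitting areas 107. The image orientation, threshold and synthetic data are assumptions.

```python
import numpy as np

# Illustrative segmentation sketch. Assumption: axis 0 of `img` is the rolling-shutter
# direction, so coded-light modulation shows up as variation along that axis; the
# threshold and synthetic data are arbitrary example choices.

def segment_by_modulation(img, threshold=1.0):
    """Boolean mask per column: True where coded-light modulation is present."""
    detrended = img - img.mean(axis=0, keepdims=True)   # remove the static brightness profile
    modulation_energy = detrended.var(axis=0)           # row-wise (temporal) variation per column
    return modulation_energy > threshold

# Synthetic example: three modulated luminaires separated by two unmodulated strips.
rows, cols = 200, 150
img = np.full((rows, cols), 100.0)
stripes = 10.0 * np.sign(np.sin(np.arange(rows) * 0.5))     # rolling-shutter banding
for start, stop in [(0, 45), (55, 95), (105, 150)]:         # data-transmitting areas
    img[:, start:stop] += stripes[:, None]

mask = segment_by_modulation(img)
print("separator columns:", np.where(~mask)[0])   # roughly columns 45-54 and 95-104
```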

The non-modulated and modulated current sources can be implemented by two separate drivers, but it can also be done with one driver. The coded light injector can be integrated in the driver, used separately or integrated on the LED board. Whichever way is used, both a modulated and a non-modulated current are made available from a single driver, which will require some extra wiring but will not be as complex as adding a separate driver.

Figure 12 shows a possible implementation to create modulated and non-modulated areas using one driver 106. The driver 106 comprises a current source 1202 and a coded light injector 1204. The current source 1202 outputs a constant current for a given dim level, and this unmodulated current is supplied along one branch directly to a first set of one or more LED boards 1206 within a luminaire 100 which form the luminous separator element(s) 501. The current from the current source 1202 is also supplied along a second branch to the coded light injector 1204, which modulates the data from the encoder 104 onto this branch of the current to provide the modulated current. The modulated current is supplied to a second set of one or more LED boards 1208 within the same luminaire as the first set 1206, wherein this second set forms the data-transmitting luminous area 107.
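
A minimal behavioural sketch of this arrangement is given below, purely by way of illustration; the current level, the DC-free modulation scheme and the function names are assumptions and are not intended to describe an actual driver design.

```python
import numpy as np

# Behavioural sketch only: assumed current level and an assumed DC-free (Manchester-like)
# modulation, so that the average output, and hence the perceived brightness, of the
# modulated and unmodulated branches remains the same.

def current_source(dim_level, n_samples):
    """Constant current for a given dim level (0..1), in assumed units of mA."""
    return np.full(n_samples, 700.0 * dim_level)

def coded_light_injector(current, bits, depth=0.1):
    """Superimpose a DC-free binary modulation onto a copy of the drive current."""
    chips = []
    for b in bits:
        chips += [1.0 + depth, 1.0 - depth] if b else [1.0 - depth, 1.0 + depth]
    waveform = np.resize(np.repeat(chips, max(1, len(current) // len(chips))), len(current))
    return current * waveform

n = 1000
unmodulated = current_source(dim_level=0.8, n_samples=n)          # to edge boards 1206 (areas 501)
modulated = coded_light_injector(unmodulated, bits=[1, 0, 1, 1])  # to central boards 1208 (area 107)

print("average currents equal:", bool(np.isclose(unmodulated.mean(), modulated.mean())))  # True
```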

The LED boards can be arranged as shown in Figure 13, with the non-modulated boards at the edge of the luminaire to create the non-modulated areas 501.

Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.