

Title:
METHOD FOR IDENTIFYING AND LOCATING A MOVABLE OBJECT
Document Type and Number:
WIPO Patent Application WO/2018/002679
Kind Code:
A1
Abstract:
In a method for identifying and locating moving or movable objects 1, solely optical signals are used: optical transmitters 2 emit unique optical signals by interrupting the light (i.e. flashing) to send a bit sequence to the receivers 3. The identifier encoded in the bit sequence represents the transmitter's 2 ID. The signals are received by optical receivers 3, so that they appear on at least one pixel of the CCD, CMOS or similar matrix sensor 8 of the receiver 3. The transmitter's 2 location, and hence the object's 1 location, can be determined using a mathematical method based on the lit pixels. The signal processing rate is set higher than the speed at which the transmitters 2 move, so that transmitter 2 displacement can be tracked using a mathematical method and the objects 1 can be identified on the move. As a novel feature of the present invention, the transmitter 2 - receiver 3 pair implements locating and identification using a single technology. The transmitters 2 can be mounted on the moving objects 1, with the receivers 3 fixed; or alternatively, the movement of several objects 1 can be controlled independently of each other by mounting the receivers 3 on the objects 1 and fixing the transmitters 2.

Inventors:
MARCZY LÁSZLÓ (HU)
Application Number:
PCT/HU2017/050025
Publication Date:
January 04, 2018
Filing Date:
June 29, 2017
Assignee:
MAGICOM KFT (HU)
International Classes:
G01B11/00
Foreign References:
US 8892252 B1 (2014-11-18)
US 6324296 B1 (2001-11-27)
US 2012/0075878 A1 (2012-03-29)
Other References:
See also references of EP 3479055A4
Attorney, Agent or Firm:
ANTALFFY-ZSÍROS, András (HU)
Claims:
Claims

1. Method for identifying and locating movable objects (1) in a defined space, comprising the steps of:
- arranging at least one transmitter (2) and at least one receiver (3) in the defined space by mounting one of the transmitter (2) and receiver (3) on the movable object (1) and the other one of the transmitter (2) and receiver (3) in the environment of said movable object (1) in a fixed manner,
- emitting, by the transmitter (2), signals assigned to the object (1) uniquely identifying said object (1),
- receiving in a wireless manner, by the receiver (3), the signals emitted by the transmitter (2),
- determining from the received signals at least one element of a group comprising: identity of the movable object (1); location, position, direction of displacement, velocity of displacement of the transmitter (2) and receiver (3) relative to each other; wherein said determination comprises signal processing, and
- providing at least one element of said group as a result,
characterized by comprising the further steps of:
- using solely optical signals as signals emitted by the transmitter (2) and assigned to the object (1),
- emitting said optical signals as a bit sequence consisting of bits in an unoriented manner,
- receiving and buffering the emitted signals in receiving cycles by the receiver (3), wherein receiving the emitted signals in receiving cycles by the receiver (3) comprises capturing the emitted optical signals during a receiving cycle by a CCD, CMOS or other pixel-based imaging sensor (8) of the receiver (3),
- reading the data represented by the bits from the imaging sensor (8) pixel by pixel, and
a) in case of a stationary object (1), identifying the transmitter (2) emitting the bit sequence based on evaluation of a bit sequence captured by a single pixel of the optical sensor (8),
b) in case of a moving object (1): designating a pixel area of the optical sensor (8) including a pixel capturing an initial bit of the bit sequence; determining whether one or more subsequent bits of the bit sequence emitted by the transmitter (2) can be captured by the same pixel of the sensor (8) of the receiver (3) or by another pixel of the sensor (8) located toward an edge of said pixel area; and if the one or more subsequent bits in the same bit sequence can be captured by a pixel located toward an edge of the pixel window, re-designating said pixel area of the optical sensor (8) so that the subsequent bits fall on a pixel in the centre region of said pixel area as much as possible, but at least within said pixel area, so they can be captured by a pixel within said pixel area;
- determining for each optical signal emitted by the transmitters (2) and received by the optical sensor (8) whether the received and buffered bits comprise a full bit sequence that is necessary and sufficient for identification, and
- identifying the object (1) if the received and buffered bits comprise a full bit sequence that is necessary and sufficient for identification, while if they do not, continuing receiving the optical signals of the transmitters (2) by the optical sensor (8) until they make up a full bit sequence, and identifying the object (1) after acquiring the full bit sequence.

2. Method as claimed in claim 1, wherein each bit sequence comprises at least one trigger bit for marking the beginning of the bit sequence, for indicating unambiguously the beginning of a new bit sequence to a signal processing unit (4) connected to the receiver (3).

3. Method as claimed in claim 2, comprising starting each bit sequence with three bits of zero value constituting said trigger bits.

4. Method as claimed in any of claims 1-3, further comprising:
- associating an area of the defined space monitored by the sensor (8) of the receiver (3) with a matrix, wherein the rows and columns of said matrix correspond to 2D points in the space monitored by the sensor (8) of the receiver (3);
- assigning the individual identified transmitters (2) to this matrix, and
- providing this matrix for further processing in an output buffer of the signal processing unit (4).

Description:
Method for identifying and locating a movable object

The present invention relates to a method for identifying and locating one or more movable objects in accordance with the preamble of claim 1, in particular a method comprising the steps of arranging at least one transmitter and at least one receiver in the defined space by mounting one of the transmitter and receiver on the movable object and the other one of the transmitter and receiver in the environment of said movable object in a fixed manner, emitting, by the transmitter, signals assigned to the object uniquely identifying said object, receiving in a wireless manner, by the receiver, the signals emitted by the transmitter, determining from the received signals at least one element of a group comprising: identity of the movable object, location, position, direction of displacement, velocity of displacement of the transmitter and receiver relative to each other; wherein said determination comprises signal processing, and providing at least one element of said group as a result.

There are a number of known systems for identifying and locating persons and/or objects in real time. They can identify and locate tracked objects in real time on a relative basis within buildings or indoors. Most of these solutions rely on wireless transmitters mounted on the objects to track, and fixed wireless receivers to receive the transmitters' signals.

The real time locating systems known to us are restricted in application by a number of factors, which are often inherent in the technology used. Some of their shortcomings: objects cannot be located precisely enough; an insufficient number of objects can be tracked at the same time; different technologies must be used, one for identifying and another for locating the objects; the greatest permissible distance between the transmitter and the receiver is too small; transmitters are too bulky. They also have other disadvantages, e.g. they cause interference by using radio frequency waves; and they involve a complex architecture that is hard to manage and maintain, which in part explains why they have high failure rates, high energy consumption levels, and high operating costs.

GB 2 475 077 A1 describes a locating system wherein laser beams scan the entire space in order to locate objects. This solution has major drawbacks, i.e. laser beams cannot be used in all settings; the approach is costly; and it does not identify the objects at all.

JP 2002 165230 A describes a method for determining the distance between objects and a receiver wherein the transmitter and the receiver are interconnected and the object is not identified.

CN 1710378 A discloses a solution for determining the location of the centre of a light beam that falls on the receiver and is unfit either for determining the distance of an object or for identifying it.

Several motion tracking systems are known which use different unique visual patterns as transmitters, and cameras as receivers to follow the patterns. The different unique visual patterns also serve as the identification code of an object. Another type of motion tracking system uses visual patterns generated by several light sources, mostly LEDs in different geometrical arrangements, e.g. in a matrix form. In such cases no light pulses are used to generate a unique ID code of an object.

There are known motion tracking systems using light pulses to identify unique objects. In these systems, synchronization between the transmitters and the receivers is indispensable for obtaining the identification code of the objects.

A combination of the two different types of motion tracking solutions mentioned above would result in a solution in which the transmitters, e.g. LEDs arranged in matrix form, are lit to form different visual patterns.

US 8,892,252 B1 proposes, on the one hand, a method for scanning across a surface of a part within a capture volume, comprising scanning the surface of the part using an inspection unit, acquiring measurement data representing one or more motion characteristics of the inspection unit using a motion capture system operatively disposed with respect to the capture volume, the one or more motion characteristics being measured using a plurality of retro-reflective markers attached to the inspection unit in a known pattern, deriving position data and orientation data from the measurement data, said position data and orientation data representing positions and orientations of the inspection unit in a coordinate system of the part being scanned, acquiring inspection data, and combining the position data with the inspection data.

US 8,892,252 B1 proposes an additional method comprising moving an inspection unit along a desired path on the surface of the part, measuring positions of retro-reflective markers attached to the inspection unit with respect to a motion capture coordinate system, converting measurements of the positions of the retro-reflective markers with respect to the motion capture coordinate system into first position data and orientation data representing the positions and orientations of the inspection unit with respect to the coordinate system of the part, encoding the first position data into simulated encoder pulses which indicate the positions of the inspection unit with respect to the coordinate system of the part, acquiring inspection data during movement of the inspection unit along the desired path, sending the simulated encoder pulses and the acquired inspection data to a processor, decoding the simulated encoder pulses into second position data representing positions of the inspection unit with respect to the coordinate system of the part, associating the second position data with the inspection data, and displaying the inspection data in accordance with the associations made.

This document does not show or propose a solution that could eliminate the problems outlined above, and its teaching can hardly be implemented in today's automated manufacturing systems.

US 6,324,296 B1 describes a motion capture method for tracking individually modulated light points, comprising imaging a plurality of light point devices attached to an object to be tracked in a motion capture environment, each being operable to provide a unique plural bit digital identity (ID) of the light point device, capturing a sequence of images of said pulses corresponding to substantially all of said plurality of light point devices; and recognizing the identities of, and tracking the positions of, substantially all of said plurality of light point devices based upon said light pulses appearing within said sequence of images, respectively. In order to achieve high recognizing and tracking rates, parallel processing of the data captured by the at least two imaging means is used. It is disadvantageous that, depending on the motion capture environment, high-resolution imaging means and high processing capability are needed for accurate capturing and identifying of each of the individually modulated light points.

None of the technical solutions listed above is capable of both locating and identifying objects using the same technology. Other known technologies, such as RFID, are either incapable of locating objects precisely enough, or the devices have to be so close to each other for high precision that their small range seriously hampers usability. Yet another disadvantage is that several known real-time locating systems cause radio frequency interference and/or are sensitive to radio frequency noise, which reduces their precision and reliability. Finally, the known real-time location and/or tracking systems always need synchronisation between transmitters and receivers to be able to obtain the unique identification code of the transmitters.

In the present invention use of only one light transmitter for one tracked object is sufficient. Due to the method, no synchronization is needed between the transmitters and the receivers.

It is the object of the present invention to provide a method which enables implementing a real-time locating and identification system that tracks a large number of objects at the same time with high precision and reliability, i.e. without errors, is inexpensive to produce, is not labour-intensive to install, operate and maintain, can be modified subsequently, and also offers scalability.

Our invention is based on the recognition that a bit sequence usable for a unique identification of selected objects and an algorithm suitable for the identification can also be combined with a locating algorithm using sensors arranged in a matrix, thus the same optical signal can be used both to identify and to locate the object.

The invention relates to a method according to claim 1 for identifying and locating movable objects in a defined space, comprising the steps of arranging at least one transmitter and at least one receiver in the defined space by mounting one of the transmitter and receiver on the movable object and the other one of the transmitter and receiver in the environment of said movable object in a fixed manner, emitting, by the transmitter, signals assigned to the object uniquely identifying said object, receiving in a wireless manner, by the receiver, the signals emitted by the transmitter, determining from the received signals at least one element of a group comprising: identity of the movable object; location, position, direction of displacement, velocity of displacement of the transmitter and receiver relative to each other; wherein said determination comprises signal processing, and providing at least one element of said group as a result. According to the invention solely optical signals as signals emitted by the transmitter and assigned to the object are used, said optical signals are emitted as a bit sequence consisting of bits in an unoriented manner, the emitted signals are received and buffered in receiving cycles by the receiver, during which the emitted optical signals are captured during a receiving cycle by a CCD, CMOS or other pixel-based imaging sensor of the receiver, and the data represented by the bits are read from the imaging sensor pixel by pixel.
In case of a stationary object, the transmitter emitting the bit sequence is identified based on evaluation of a bit sequence captured by a single pixel of the optical sensor, while in case of a moving object a pixel area of the optical sensor including a pixel capturing an initial bit of the bit sequence will be designated, and it will be determined whether one or more subsequent bits of the bit sequence emitted by the transmitter can be captured by the same pixel of the sensor of the receiver or can be captured by another pixel of the sensor located toward an edge of said pixel area. If the one or more subsequent bits in the same bit sequence can be captured by a pixel located toward an edge of the pixel window, said pixel area of the optical sensor shall be re-designated so that the subsequent bits fall on a pixel in the centre region of said pixel area as much as possible, but at least within said pixel area, so they can be captured by a pixel within said pixel area. Subsequently it will be determined for each optical signal emitted by the transmitters and received by the optical sensor whether the received and buffered bits comprise a full bit sequence that is necessary and sufficient for identification. If so, the object will be identified; if not, receiving the optical signals of the transmitters by the optical sensor will be continued until they make up a full bit sequence, after which the object will be identified.

Dependent claims set forth preferred embodiments. Preferably, each bit sequence comprises at least one trigger bit for marking the beginning of the bit sequence for indicating unambiguously the beginning of a new bit sequence to a signal processing unit connected to the receiver. In this case each bit sequence can be preferably started with three bits of zero value constituting said trigger bits.

According to a further preferred embodiment an area of the defined space monitored by the sensor of the receiver is associated with a matrix wherein the rows and columns of said matrix correspond to 2D points in the space monitored by the sensor of the receiver and the individual identified transmitters are assigned to this matrix that is provided for further processing in an output buffer of the signal processing unit.

Further characteristics and advantages will be clearer from the detailed description of an exemplifying but non-limiting embodiment of a method for identifying and locating one or more movable objects in accordance with the present invention. Such description will be set forth below with reference to the enclosed figures, provided only for exemplifying and hence non-limiting purposes, in which:

Figures 1a-1c show, in functional block terms, alternative scenarios for assigning transmitters and receivers to the objects;

Figure 2 shows an exemplary block diagram of a configuration implementing the method according to the invention;

Figures 3a-3c show an outline of the pixel image of the light beam of the transmitter on the imaging sensor of the receiver; and

Figures 4a-4b show a flowchart of an implementation of the method in accordance with the present invention.

An exemplary implementation of the method according to the invention for identifying and locating movable objects within a specific area is explained in detail below.

The exemplary implementation of the method takes place in a defined and enclosed space, such as a warehouse, a shop floor or a similar site. A tracked object 1 can essentially be any object, e.g. one already stored or to be stored in the warehouse, one that is processed on the shop floor, or even a receptacle, vessel or crate holding the objects to track; and the object 1 can be either stationary or moving, as we shall see. Objects 1 may also be persons or animals, though this requires further consideration.

Similarly to currently known technologies, the identification of each object 1 requires at least a transmitter 2, a receiver 3, and a signal processing unit 4 connected to the latter.

The process can be implemented by mounting the transmitters 2 on the objects 1 and fixing the receivers 3 in the indoor space where the objects 1 are, as shown in Figure 1a, but also alternatively, by mounting the receivers 3 on the objects 1 and fixing the transmitters 2, as shown in Figure 1b. The two scenarios can also be combined, i.e. each object 1 is assigned a transmitter 2 and a receiver 3, and also, transmitters 2 and receivers 3 are fixed in the space where the objects 1 are, as shown in Figure 1c.

A key feature of the method according to the invention is that all communications between the transmitter 2 and receiver 3 rely solely on light, which obviously means that ambient light should not interfere with the light used for communication purposes, and that the latter itself should not disturb, or interfere with, the objects or persons in the space. To that end, the transmitters 2 use infrared light, because it does not disturb people in the premises and offers some interference protection; and the light of the transmitter is received by at least one receiver 3.

The method according to the invention can thus be implemented under ordinary lighting conditions. It should be noted that, though for the sake of simplicity the exemplary implementation includes a single transmitter 2, it is both possible and advantageous to have a number of transmitters 2 in the defined space, each with its own unique identification (ID).

As outlined in Figure 2, the transmitter 2 of the exemplary implementation contains an energy source 5, such as a battery or a rechargeable battery, a signal generator stage 6 and a light emitting diode (LED) 7. The exemplary implementation includes an ATMEGA88PA microcontroller as the signal generator stage 6, which drives a 950 nm wide-angle undirected IR LED 7, e.g. an SFH4240, through a serial resistor R1. The transmitter 2 has an integrated energy source 5, i.e. a rechargeable 3V battery in the exemplary configuration. Depending on the purpose and circumstances of application, the rechargeable battery can actually be a known solar cell charger unit plus a battery, or power can simply be supplied by a non-rechargeable battery. The microcontroller acting as a signal generator stage 6 ensures that the light signals of the LED 7 are unique to the transmitter 2 and thus uniquely identify the object 1 to which the transmitter 2 is attached. The unique signals in the exemplary configuration are flashes of light that make up a bit sequence ID, with the microcontroller flashing the LED 7. The ID sequence in this exemplary implementation consists of 10 bits. The position and location, respectively, of the tracked object 1 within the space, e.g. on the shop floor, can be read from the frames recorded by the appropriately mounted and positioned receiver 3. In the exemplary implementation, the matrix sensor 8 of the receiver 3 is a CCD IP camera, such as the Aircam by Ubiquiti Networks, with a resolution of 1024x768 pixels and a refresh rate of 30 fps (frames per second). The matrix sensor 8 is symbolised by a square grid in the drawing. The factory IR filter of the camera in front of the matrix sensor 8 was replaced with a colour filter that has a very steep pass-through curve, which only allows the 950 nm signal of the transmitter 2 to pass. The digital output of the camera of the receiver 3, i.e.
the frames captured by the sensor, is transmitted to a buffer stage combined with a signal processing unit, to be presented later on. The frequency of the bit sequence emitted by the transmitter 2 matches the frame reading frequency of the sensor, i.e. of the CCD sensor of the receiver 3 in the present case. This means that in a 10-bit sequence, the duration of one bit is 1/30 s.
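
The timing scheme described above (one bit per captured frame, ten bits per identifier) can be sketched as follows. This is a minimal sketch; the function and constant names are illustrative assumptions, not taken from the patent.

```python
FRAME_RATE = 30                   # receiver refresh rate in frames per second
BIT_DURATION = 1.0 / FRAME_RATE   # one bit lasts exactly one frame (1/30 s)
ID_LENGTH = 10                    # bits per identifier in the example

def id_to_flashes(transmitter_id: int) -> list[int]:
    """Encode a transmitter ID as a 10-bit flash pattern (1 = LED on)."""
    if not 0 <= transmitter_id < 2 ** ID_LENGTH:
        raise ValueError("ID must fit in 10 bits")
    # Most significant bit first; each element drives the LED for one frame.
    return [(transmitter_id >> (ID_LENGTH - 1 - i)) & 1 for i in range(ID_LENGTH)]
```

A microcontroller in the role of the signal generator stage 6 would then hold the LED 7 on or off for BIT_DURATION seconds per element of the returned pattern.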

In order for the exemplary implementation to function properly, the transmitter 2 and the receiver 3 must have an unimpeded optical connection, i.e. they must be visible to each other. Should this not be feasible, the optical connection must be replaced by some other wireless connection which, though it should not be a problem for a person skilled in the art, is outside the scope of the present invention.

As should be obvious to a person skilled in the art, the CCD sensor of the receiver 3 can also be replaced with a CMOS sensor, but this requires additional technical arrangements to function properly and without errors.

By default, the transmitter 2 can transmit its unique bit sequence ID on an ongoing basis. The term "ongoing" in this case means repeating the 10-bit sequence with an interval that allows the receiver 3 and the connected signal processing unit 4 to process the signal of the transmitter 2 successfully and to identify the transmitter 2.

An energy-saving alternative is to have the transmitter 2 transmit the bit sequence ID non-continuously, linking the transmission mode, using a known solution, to the state of the object 1, i.e. moving or stationary. When the object 1 is moving, the transmitter 2 keeps transmitting the ID, as the receiver 3 needs current bit sequences in order to locate the transmitter 2. When the transmitter 2 is stationary, ongoing transmission is unnecessary; it suffices for the transmitter 2 to transmit the ID every now and then, e.g. every 10-30 seconds, partly to signal its location, and partly to confirm that it is still operational. This mode of operation can be altered by a person skilled in the art by adding a motion detector to the transmitter 2 and using the motion detector's output to vary the transmission schedule of the transmitter 2.
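
The motion-dependent transmission schedule can be expressed as a small decision rule. The 20-second idle interval below is an assumed value within the 10-30 second range mentioned above, and the function name is illustrative.

```python
IDLE_INTERVAL = 20.0   # assumed seconds between keep-alive IDs while stationary

def next_transmission_delay(motion_detected: bool) -> float:
    """Delay before the next ID transmission under the scheme above.

    A moving object transmits continuously (zero delay between sequences);
    a stationary object sends a sparse keep-alive to signal its location
    and confirm that it is still operational.
    """
    return 0.0 if motion_detected else IDLE_INTERVAL
```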

In the method shown by way of example in the flowchart in Figures 4a and 4b, the bit sequence of the transmitter 2 is received by a receiver 3. The light of the LED 7 of an individual transmitter 2 falls on a single pixel or a group of pixels of the CCD or CMOS type sensor 8 of each receiver 3 that has visual contact with the transmitter 2. If the beam falls on several pixels, the centre pixel is selected using a known mathematical averaging method. The location of said pixel or pixel group in the matrix depends on where the transmitter 2 is in relation to the receiver 3, so the position of the transmitter 2 can be determined mathematically, i.e. using the known triangulation method, based on the X and Y coordinates of a single pixel or the centre of a pixel group in the frames read by the sensor 8 of the receiver 3. This means in the present case that we check whether any change has been induced by the bit sequence output by the transmitter 2 in any single pixel of the 1024x768 pixels of the camera sensor of the receiver 3, using the refresh frequency of the sensor. This is outlined schematically in Figures 3a-3c, wherein the single active signals of the 10-bit sequences of the transmitter 2, i.e. when the LED 7 emits light, appear on a pixel or a number of contiguous pixels of the CCD sensor.
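
The centre-pixel selection by averaging can be sketched as a simple centroid computation over the lit pixels. This is only one possible averaging method; the coordinate convention and function name are assumptions.

```python
def centre_pixel(lit_pixels: list[tuple[int, int]]) -> tuple[int, int]:
    """Select the centre of a lit pixel group by averaging coordinates."""
    xs = [x for x, _ in lit_pixels]
    ys = [y for _, y in lit_pixels]
    # Round the mean coordinates to the nearest pixel position.
    return round(sum(xs) / len(xs)), round(sum(ys) / len(ys))
```

For a beam spread over pixels (10, 20), (12, 20) and (11, 22), this yields (11, 21), the single coordinate pair then fed to the locating step.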

As the next step in the method shown, a signal processing unit 4 connected to the receiver 3 first determines whether the bit sequence transmitted by the transmitter 2 is a full and integral sequence that allows the receiver 3 to identify the transmitter 2. To that end, the receiver 3 receives the signal in so-called reception cycles, wherein each cycle corresponds to the duration of the bit sequence transmitted by the transmitter 2.

In each reception cycle, we test the bit sequence received on a given pixel, and first ascertain whether the signals received in that cycle constitute a full sequence that allows for the clear and unique identification of the transmitter 2.

Since the sensor 8 of the receiver 3, i.e. the camera, is fixed and is looking in a known direction, the signal received on a given pixel or pixel group of the CCD sensor allows for determining where the bit sequence is coming from, so the position or location of the transmitter 2 can be computed simply and precisely, using the known triangulation method. This ensures that the transmitter 2 is identified and its location is determined, too.
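
Because the camera is fixed and looks in a known direction, each pixel corresponds to a bearing from the receiver. The linear pixel-to-angle model below is only a sketch: the field-of-view values are assumptions, and a real deployment would use a calibrated lens model plus a second bearing (or a known mounting height) to complete the triangulation.

```python
RES_X, RES_Y = 1024, 768    # sensor resolution from the exemplary camera
FOV_X, FOV_Y = 60.0, 45.0   # assumed horizontal/vertical field of view, degrees

def pixel_to_bearing(px: int, py: int) -> tuple[float, float]:
    """Map a pixel to angles off the camera's optical axis, in degrees.

    Linear model: the sensor centre maps to (0, 0); edges map to half
    the field of view. Lens distortion is ignored in this sketch.
    """
    ax = (px - RES_X / 2) / RES_X * FOV_X
    ay = (py - RES_Y / 2) / RES_Y * FOV_Y
    return ax, ay
```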

The above statements apply to the case when the object 1 to identify is not moving. In case the object 1 is moving, the transmitter 2 moves with it, so the location of the light emitted by it varies in the space, and this is detected by the receiver's 3 matrix sensor 8. To make sure the sensor 8 of the receiver 3 is receiving the signals of the same moving transmitter 2, a target pixel is designated at the beginning of each reception cycle, or a centre pixel is designated if the sensor 8 receives a signal that covers a pixel group. In each frame, we check whether the target pixel has stayed in its original place or whether it has moved on in some direction, to an adjacent or nearby pixel. If, based on the reception of single frames, it is determined that the target pixel has changed place since the start of the reception cycle, an auxiliary algorithm is used to make sure the target pixel is, to the extent possible, always at the centre of a predefined area that is smaller than the area of the sensor 8, so we don't lose sight of the moving target mid-cycle, and so that we can determine the ID of the transmitter 2. If it is determined that the target pixel is moving, it is tracked using an observation field that comprises additional pixels surrounding the target pixel and is shifted in the direction the target pixel is moving on the sensor 8.
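
The re-designation of the observation field described above can be sketched as follows. The window size, edge margin, and function name are assumed values for illustration, not figures from the patent.

```python
WINDOW = 5        # side of the square observation field, in pixels (assumed)
EDGE_MARGIN = 1   # re-designate once the target gets this close to an edge

def update_window(window_origin: tuple[int, int],
                  target: tuple[int, int]) -> tuple[int, int]:
    """Return the (possibly shifted) top-left origin of the observation field.

    The field is only moved when the target pixel approaches its edge,
    and it is then re-centred on the target, matching the re-designation
    step of the method.
    """
    ox, oy = window_origin
    tx, ty = target
    half = WINDOW // 2
    near_edge = (tx - ox <= EDGE_MARGIN or ox + WINDOW - 1 - tx <= EDGE_MARGIN
                 or ty - oy <= EDGE_MARGIN or oy + WINDOW - 1 - ty <= EDGE_MARGIN)
    if near_edge:
        return (tx - half, ty - half)   # re-centre the field on the target
    return (ox, oy)
```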

Another version of the method according to the invention allows for determining not only the location of the transmitter 2 but also its velocity, based on the distance the target pixel travels during a reception cycle and the size of the space observed.
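
Under an assumed metric scale for the observed space, the velocity estimate reduces to pixel displacement over elapsed time. The scale factor below is a hypothetical calibration value.

```python
FRAME_RATE = 30           # fps, from the exemplary receiver
METRES_PER_PIXEL = 0.01   # assumed scale of the observed space at the object

def velocity(start_pixel: tuple[int, int],
             end_pixel: tuple[int, int],
             frames_elapsed: int) -> float:
    """Speed estimate (m/s) from target-pixel travel during a reception cycle."""
    dx = end_pixel[0] - start_pixel[0]
    dy = end_pixel[1] - start_pixel[1]
    pixels = (dx * dx + dy * dy) ** 0.5   # Euclidean pixel displacement
    seconds = frames_elapsed / FRAME_RATE
    return pixels * METRES_PER_PIXEL / seconds
```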

If there are several tracked objects 1 and each is associated with a transmitter 2, the signals of the transmitters 2 may be mixed, in theory and also in practice, by the time they reach the sensor of the receiver 3. This is, however, not a problem for the proposed process, because the integrity and validity of the sequence is checked in each cycle and, should this test fail, the sequence is ignored for that cycle, until full sequences are received again from the transmitters 2 assigned to the objects 1. The transmitters 2 assigned to the individual objects 1 are not synchronised with each other, i.e. there is no mechanism to ensure that each transmitter 2 starts transmitting its own ID sequence at the same time. Therefore, transmitters 2 can also be identified by storing each frame temporarily, then arranging the frame pixel values, for instance, in a matrix, to check which sequence the bit on each element of the matrix belongs to, and also whether it is the initial, interim or final bit of the sequence. If it is the final bit in a sequence, the transmitter 2 can be identified by evaluating the 10 last stored bits, i.e. the length of the sequence. If the bit turns out to be other than the last bit of a sequence, we will wait until the receiver's 3 frames yield a full sequence, and then identify the related transmitter 2 on that basis. In order to simplify the integrity test of the transmitters' 2 sequences, the method according to the invention may use special bits to mark clearly the beginning of a new sequence for the signal processing unit 4 connected to the receiver 3. The initial marker bits can take the form of three bits with zero value, for example, i.e. the transmitter 2 does not emit any light pulses for the duration of these bits. As should be obvious to a person skilled in the art, this acts as a trigger that clearly marks the beginning of a new bit sequence.

Another optional scenario is to associate the area surveyed by the sensor 8 of the receiver 3 with a matrix whose lines and columns correspond to 2D points in the space surveyed by the sensor 8 of the receiver 3, to identify and locate the individual transmitters 2 in this matrix, and to make this output matrix available in the output buffer of the signal processing unit 4 for any further processing.
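
The output matrix described above can be sketched as follows. The cell-per-2D-point layout comes from the description; the input format (a list of located transmitters with their coordinates and IDs) and all names are illustrative assumptions.

```python
def build_output_matrix(width, height, detections):
    """Build the output matrix described in the text: one cell per
    surveyed 2D point, each holding the ID of the transmitter
    located there, or None if the point is empty.
    `detections` is a hypothetical list of (x, y, id) tuples."""
    matrix = [[None] * width for _ in range(height)]
    for x, y, tid in detections:
        matrix[y][x] = tid
    return matrix

# Two transmitters located in a 4x3 surveyed area:
m = build_output_matrix(4, 3, [(0, 2, "TX-5"), (3, 0, "TX-6")])
```

A downstream process-control system could then poll this matrix from the output buffer of the signal processing unit 4 without knowing anything about the optical layer.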

In the implementation shown, the signal processing unit 4 is a computer module that processes the bit streams coming from the receivers 3. The signal in the exemplary implementation is an MPEG4 stream due to the specific camera, but other formats can also be used. Based on the data from the receivers 3, the central signal processing unit 4 determines the exact location and ID of each transmitter 2 and offers connectivity to other systems, such as industrial process control or management systems.

As already mentioned, the tracked objects 1 are identified based on the unique light signals of the transmitters 2. The flashing LED light represents binary signals, e.g. an active light-on condition stands for logical 1, and a light-off condition stands for logical 0. The bit sequences of the transmitters 2 are different from each other; they uniquely identify each transmitter 2 and thus each object 1. The signal rate of the transmitter 2, i.e. the transmission and repeat rate of individual bit sequences, must be the same as the sampling rate of the sensor 8 of the receiver 3, and they must also be synchronised for proper reception. This can be ensured by setting the signal generator stage 6 of the transmitter 2 to the known sampling rate of the sensor 8 of the receiver 3. The position of the transmitter 2 can be read from the frame captured by the sensor 8 of the receiver 3, and the signals in several consecutive frames make up the bit sequence that serves as the ID. In the exemplary implementation, the transmitters 2 emitting infrared bit sequences are mounted on the objects 1, and the goal is to determine their precise location at all times in a defined indoor space. The CCD sensors 8 of the receiver 3 receive the IR signals of the transmitters 2 through a lens 9 and a colour filter 10. The objects 1 are located and identified on the basis of the signals received. A possible identification implementation is shown in Figures 3a-3c. The light from the transmitter 2 mounted on the object 1 falls on the pixel matrix of the CCD sensor 8 of the receiver 3, symbolised by black dots in the Figures. The ID of the object 1 - based on three exemplary frames and the corresponding points of light read from the lower left corner of the CCD sensor 8 - is 1-0-1. The ID of another object 1 - based on the points of light in the frames in the top right corner of the CCD sensor 8 - is 1-1-0.
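
The frame-by-frame ID reading illustrated in Figures 3a-3c can be modelled in a few lines. The 3x3 frames below are an illustrative stand-in for the sensor's pixel matrix (a lit pixel is 1, a dark pixel 0); they are arranged so that the lower-left pixel spells 1-0-1 and the top-right pixel spells 1-1-0, matching the example in the text.

```python
# Three consecutive binarised frames, modelled on Figures 3a-3c.
frames = [
    [[0, 0, 1],   # frame 1: top-right lit, lower-left lit
     [0, 0, 0],
     [1, 0, 0]],
    [[0, 0, 1],   # frame 2: top-right lit, lower-left dark
     [0, 0, 0],
     [0, 0, 0]],
    [[0, 0, 0],   # frame 3: top-right dark, lower-left lit
     [0, 0, 0],
     [1, 0, 0]],
]

def read_id(frames, row, col):
    """Read the bit at one pixel position across consecutive frames;
    the resulting bit string is the transmitter's ID."""
    return "".join(str(f[row][col]) for f in frames)

lower_left_id = read_id(frames, 2, 0)
top_right_id = read_id(frames, 0, 2)
```

The key point is that identification needs no per-frame metadata at all: the pixel position carries the location, and the temporal pattern at that position carries the identity.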

Every single pixel of each frame is processed by an appropriate software utility in the signal processing unit 4. The pixels that correspond to the signals from the transmitters 2 visible to the sensor 8 of the receiver 3 are recognised. The ID is the bit sequence that consists of light (binary 1) and dark (binary 0) pixels at the same position in consecutive frames. Each transmitter 2 sends a unique ID, i.e. a unique bit sequence converted into optical signals. According to the method, the alternation of dark and light pixels in consecutive frames is found and detected. If the alternating dark/light pixels show the same pattern as the preprogrammed flash sequence of any of the transmitters 2, the match is recognised and the transmitter 2 is identified.

Obviously, no two transmitters 2 send the same bit sequence within one installation, i.e. each transmitter 2 emits its unique flash sequence. Since the flash sequence of each transmitter 2 is known, each transmitter 2 can be identified unambiguously. The present implementation uses 10-bit IDs, which allow for up to 1023 transmitters 2 in the system. This number is sufficient for covering large indoor areas.
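
The 1023 figure can be checked by a one-line capacity calculation. A 10-bit code has 2^10 = 1024 distinct patterns; a plausible reading of the text is that the all-zero pattern is excluded (a transmitter that never lights up cannot be detected, and zero runs are reserved for the start marker), leaving 1023 usable IDs. The exclusion rationale is an assumption of this sketch, not stated in the source.

```python
# Capacity of the 10-bit ID scheme: all bit patterns minus the
# all-zero sequence, which is assumed unusable (never lights up).
ID_BITS = 10
capacity = 2 ** ID_BITS - 1
```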

The interval between bit sequences is at least as long as the time required for identification. The signal can thus be precisely distinguished from optical noise and other interference. Besides identification, the invention is also aimed at determining the current location of individual objects when certain conditions - e.g. duplicated transmitters - are available. This requires the mounting position of each receiver 3 to be recorded precisely in the signal processing unit 4. The position data are then used for computing the location with the mathematical triangulation method. Light falls on a specific X, Y pixel of the sensor 8 of the receiver 3. The X and Y pixel parameters determine the angle at which the transmitter 2 is visible from the perspective of the receiver 3. Distance data combined with the angles of visibility clearly determine the location of the transmitter 2 on a 2D plane. The location is determined using the known triangulation method. The use of two receivers 3 also enables determining in 3D where the transmitter 2 is located, again using the triangulation method.
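
The 2D triangulation step can be sketched as the intersection of two sight rays. The patent only invokes "the known triangulation method"; the function below is a hypothetical helper, assuming each receiver's mounting position is known and that the lit pixel's X, Y parameters have already been converted into a bearing angle in the plane.

```python
import math

def triangulate_2d(p1, angle1, p2, angle2):
    """Locate a transmitter in the plane from two receivers with
    known positions p1, p2 and measured bearing angles (radians,
    from the +x axis), by intersecting the two sight rays.
    Ray i is: p_i + t * (cos a_i, sin a_i)."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(angle1), math.sin(angle1))
    d2 = (math.cos(angle2), math.sin(angle2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("sight lines are parallel; cannot triangulate")
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Receivers at (0, 0) and (10, 0) see the transmitter at bearings
# of 45 and 135 degrees; the rays intersect at (5, 5).
pos = triangulate_2d((0, 0), math.radians(45), (10, 0), math.radians(135))
```

Extending the same idea with a second angular coordinate per receiver yields the 3D case mentioned above.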

Figures 4a-4b show a flowchart of an exemplary method of implementation of the signal processing in accordance with the present invention.

The size of the frame captured for processing by the camera of the receiver 3 is Xpic, Ypic. Each variable is set to the default value of 0 at the start of the operation. Operation of the exemplary method starts in step 102, where image input data 101 of individual frames are stored pixel by pixel in a Buffer0. In the process presented, a light pixel represents a logical one and a dark pixel represents a logical zero. Operation proceeds from start step 102 to step 103, where it is tested whether the image input data of a frame have been received in full. If not, the process returns to step 102 to read more image data and store them in Buffer0. When it is determined in step 103 that the image data of a full frame are stored in Buffer0, the pixels are digitised, i.e. converted into logical 0/1 (dark/light) signals, in step 104 and are used in this digital form for further processing in step 105. The digitised dark/light signals of the frame are then stored in Buffer1. The evaluation of individual pixels starts with the pixel in row 0 of column 0 in the matrix coming from the sensor 8 and proceeds column by column, then moves on to the next row. The process continues with each and every pixel of the frame until the last column of the last row in the matrix is reached. Pixel evaluation yields an X, Y coordinate pair in Buffer0, provided that the sensor 8 of the receiver 3 detected and captured the signal from the transmitter 2. Then the bit value with the mentioned coordinates in Buffer0 is compared with the bit having the same coordinates in Buffer1.
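
The digitisation of step 104 and the row-by-row scan of step 105 can be sketched as follows. The threshold value and all names are illustrative assumptions; the flowchart does not prescribe a particular binarisation method.

```python
def digitise(frame, threshold=128):
    """Step 104 sketch: convert a greyscale frame into logical 0/1
    (dark/light) form. The threshold is an assumed parameter."""
    return [[1 if px >= threshold else 0 for px in row] for row in frame]

def lit_pixels(binary_frame):
    """Step 105 sketch: scan the matrix row by row, column by
    column, collecting the (X, Y) coordinates of lit pixels."""
    coords = []
    for y, row in enumerate(binary_frame):
        for x, bit in enumerate(row):
            if bit:
                coords.append((x, y))
    return coords

frame = [[10, 200, 12],
         [9, 11, 13],
         [250, 8, 7]]
buffer0 = digitise(frame)
hits = lit_pixels(buffer0)
```

The coordinate pairs produced here are exactly what the subsequent Buffer0/Buffer1 comparison operates on.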

If in step 106 it is determined that the bit with the specific coordinates in Buffer0 is identical with the bit at the same position in Buffer1, and if in step 107 it is determined that a code sequence is already being compiled for that pixel, the bit stored in Buffer0 is added to the code sequence in step 108. If in step 106 it is determined that the bit in Buffer0 is not identical with the bit in Buffer1, and if in step 109 it is determined that no code sequence is being compiled for that pixel either, then in step 110 the bit in Buffer1 is regarded as the initial bit and a new code sequence is started with it. After comparing the Buffer0 bit with the Buffer1 bit having the same X, Y coordinates, in step 111 it is determined whether the code sequence is complete, i.e. whether it has reached the full predefined length in number of bits. If not, the next frame(s) is/are read and processed. When the sequence has reached the predefined length, in step 112 a time stamp is added to it and the sequence thus completed, i.e. the code and time stamp for the pixel at the X and Y coordinates, is released for subsequent processing.
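
The per-pixel accumulation of steps 106-112 can be condensed into a single sketch. For brevity this folds the Buffer0/Buffer1 comparison into one accumulator: each frame contributes one bit per pixel, and a sequence is released with a time stamp once it reaches the predefined length. All names and the simplification are the sketch's own assumptions.

```python
import time

SEQ_LEN = 10  # predefined sequence length from the description

def accumulate(partials, binary_frame, now=None):
    """Simplified sketch of steps 106-112: append the current
    frame's bit to each pixel's partial code sequence; when a
    sequence reaches SEQ_LEN bits, release it with a time stamp
    (step 112) and restart accumulation for that pixel."""
    released = []
    for y, row in enumerate(binary_frame):
        for x, bit in enumerate(row):
            seq = partials.setdefault((x, y), [])
            seq.append(bit)
            if len(seq) == SEQ_LEN:
                stamp = now if now is not None else time.time()
                released.append(((x, y), list(seq), stamp))
                partials[(x, y)] = []  # start a fresh sequence
    return released

# Usage: feed ten single-pixel frames alternating dark/light;
# the completed 10-bit sequence is released after the tenth frame.
partials = {}
out = []
for i in range(10):
    out += accumulate(partials, [[i % 2]], now=0.0)
```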

The sequence is also checked for CRC/checksum in step 113 before release, and if the test returns an error, the code sequence is discarded in step 114. In this case, the ID and location of the transmitter 2 are unknown for a short period, i.e. a few seconds, as the corresponding code has been discarded. If the CRC/checksum result is correct, the X, Y coordinates serve as input for locating the transmitter 2, while the code sequence represents its ID.
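
The integrity test of step 113 can be illustrated with the simplest possible scheme. The patent leaves the exact CRC/checksum open, so the even-parity check below is only one hypothetical choice, not the patented method.

```python
def parity_ok(sequence):
    """Illustrative step 113 check: treat the last bit of the
    sequence as an even-parity bit over the preceding data bits.
    One simple stand-in for the unspecified CRC/checksum."""
    *data, check = sequence
    return sum(data) % 2 == check

# Data bits 1,0,1,1 sum to 3 (odd), so the parity bit must be 1:
valid = parity_ok([1, 0, 1, 1, 1])
invalid = parity_ok([1, 0, 1, 1, 0])
```

A corrupted sequence simply fails this test and is dropped for one cycle, matching the discard-and-resynchronise behaviour of step 114.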

If it is determined in step 107 that no sequence has been started yet for the specific pixel, compiling the code is continued in step 115. In step 116 it is checked whether the end of the pixel row has been reached; if so, reading the next row is started in step 117; if not, the process returns to step 106 and the operation is repeated from there. Then in step 118 it is checked whether the end of a pixel column has been reached. If not, the process returns to step 106 and the operation is repeated from there; if yes, then after analysing the pixels of a frame in Buffer0, in step 119 Buffer1 is flushed, the frame data are copied to Buffer1, and then Buffer0 is flushed to make room for the next frame. As should be obvious to a person skilled in the art, any beam that can be focused on the receiver 3 side can be used for locating and identification purposes. If optical transmitters 2 are used, their signal generator stage 6 must be capable of controlling the transmitter 2. Though the exemplary implementation uses the IR range, which is recommended indoors, the process can also be implemented using signals of a different wavelength. The receiver 3 must be capable of detecting, focusing and digitising the optical/light signals from the transmitters 2 and of passing them on to the signal processing unit 4.

Another optional implementation may include multiple transmitters 2 operating on different wavelengths. In this case, the receivers 3 receive the signals through colour filters 10 of the appropriate wavelengths. For example, if the transmitters 2 use three different infrared wavelengths, the RGB colour filters 10 of the receivers 3 are replaced with three different IR filters. This speeds up processing, because three bits of information can be encoded in a single frame as opposed to just one bit, i.e. data density is tripled. The sensor 8 of the receiver 3 used in this implementation has a resolution of 1024x768 pixels, but the resolution can obviously be lower or higher as well. As should be obvious to a person skilled in the art, resolution determines locating precision.
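
The tripled data density claim can be checked with a small calculation: with one wavelength a frame carries one bit per pixel, while with three separately filtered wavelengths it carries three, so a full 10-bit ID needs fewer frames. The helper below is an illustrative assumption, not part of the patented method.

```python
def frames_needed(id_bits, n_wavelengths):
    """Frames required to transmit a full ID when each frame can
    carry one bit per wavelength channel (ceiling division)."""
    return -(-id_bits // n_wavelengths)

single = frames_needed(10, 1)  # one IR wavelength
triple = frames_needed(10, 3)  # three IR wavelengths, as above
```

Since identification latency is proportional to the number of frames per ID, the multi-wavelength variant also shortens the time during which a fast-moving object 1 remains unidentified.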

The sensor 8 of the receiver 3 can be that of any digital camera wherein the sensor pixels are arranged so as to allow for locating the transmitter 2 of the light signals using a known algorithm.

As mentioned earlier, the interval between the bit sequences of the transmitters 2 is set equal to, or greater than, the time required for identification. This process can be enhanced with optional error-checking algorithms, such as CRC or parity check, in order to distinguish the genuine signal better from optical noise and interference. Obviously, the signal processing rate is set higher than the speed at which the objects 1 and the transmitters 2 mounted on them move, so that transmitter 2 displacement can be tracked using a mathematical method and the objects 1 can be identified on the move.

As for the transmission of digital data, any method or system can be used; the method according to the invention raises no special requirements.

For example, the process can be implemented in a system wherein the receivers 3 contain a microcomputer 3a. Components may include, for example, a Raspberry Pi 3 module with 4 GB of RAM, an SD card as buffer, and an IP camera serving as the sensor 8 of the receiver 3, with its CSI port connected to the microcomputer's CSI interface. The microcomputer 3a processes the signals coming from the sensor 8 of the receiver 3, which yields the unique IDs of the transmitters 2, and then sends the corresponding X, Y coordinates to a signal processing unit, in this case the central signal processing unit.

Transmitters 2 and receivers 3 may be positioned in a number of different ways. In a potential configuration, the transmitters 2 are mounted in fixed positions, e.g. on the ceiling, and the receivers 3 are mounted on the objects 1 whose paths are to be determined through locating. The CCD sensors 8 of the receivers 3 receive the IR signals of the transmitters 2 through the lens 9 and the colour filter 10. The objects 1 are located and identified based on the signals received, so if the goal is to move an object 1 to a different location, the difference between the current and target positions can be used to manipulate its controls so that it reaches its destination. Figure 1c shows an implementation wherein transmitters 2 and receivers 3 are mounted both in fixed positions and on the objects 1 to be identified. The resulting two-way communication allows for locating and identifying the objects 1, and for directing them to the desired positions.

Key benefits of the proposed method are:

- Locating precision is significantly higher than in known solutions.

- The same technology - light beams - can be used both to locate and to identify objects in a simple way.

- Digital (binary) identification allows for tracking multiple objects.

- The distance between transmitters and receivers can be greater than in the known solutions.

- Since the transmitters and receivers are of optical type, they cause no radio frequency interference.

- The transmitters and receivers are not sensitive to radio frequency noise or interference.

It should be pointed out that the steps and devices described herein only represent one possible implementation of the invention, and that a person skilled in the art can use the above specification and exemplary implementation to devise several variant implementations within the scope of the invention.

List of reference signs

1 object

2 transmitter

3 receiver

3a microcomputer

4 signal processing unit

5 energy source

6 signal generator stage

7 light emitting diode, LED

8 sensor

9 lens

10 colour filter

X parameter

Y parameter

101 input image data

102-119 step