


Title:
MACHINE LEARNING IN A MULTI-UNIT SYSTEM
Document Type and Number:
WIPO Patent Application WO/2018/096544
Kind Code:
A1
Abstract:
A method for machine learning in a multi-unit system includes receiving at a central unit a plurality of parameter sets generated by a plurality of units, each parameter set generated by a different unit, calculating a new parameter set based on the plurality of parameter sets and transmitting the new parameter set to a unit from the plurality of units, to update a machine learning process at the unit.

Inventors:
NATHANIEL RAM (IL)
Application Number:
PCT/IL2017/051289
Publication Date:
May 31, 2018
Filing Date:
November 27, 2017
Assignee:
POINTGRAB LTD (IL)
NATHANIEL RAM (IL)
International Classes:
G06F19/00; G06N20/00; G06T7/20
Foreign References:
US20140088989A1 2014-03-27
US20150279051A1 2015-10-01
US20160148044A1 2016-05-26
US20150170053A1 2015-06-18
US20150193695A1 2015-07-09
Other References:
"Product description, Pointgrab,", HTTPS://WEB.ARCHIVE.ORG/WEB/20161031113337/HTTP://WWW.POINTGRAB.COM/PRODUCT, 31 October 2016 (2016-10-31), XP055488077, Retrieved from the Internet
Claims:
Claims

1. A multi-unit system, comprising:

a plurality of units, each unit configured to generate a parameter set by running a local training process using a local database of training examples;

a computing unit to receive the parameter sets generated by the plurality of units, combine the plurality of parameter sets to generate a new parameter set, and transmit the new parameter set to at least one unit of the multi-unit system.

2. The system of claim 1 wherein the local database of training examples comprises image data of a space.

3. The system of claim 2 comprising a plurality of image sensors to obtain the image data of the space, each image sensor associated with a different unit from the plurality of units.

4. The system of claim 2 wherein each unit is configured to use the new parameter set to detect an occupant in the image data of the space.

5. The system of claim 4 wherein the occupant comprises an object having human motion characteristics.

6. The system of claim 4 wherein the occupant comprises an object having predetermined shape characteristics.

7. The system of claim 4 wherein the occupant comprises an object having predetermined shape and motion characteristics.

8. The system of claim 1 wherein the local database of training examples comprises snapshots of a tracker that tracks objects in images of a space.

9. The system of claim 1 wherein the computing unit is to transmit the new parameter set to a sub-set of units of the multi-unit system.

10. The system of claim 9 wherein the computing unit is to generate a plurality of differing new parameter sets, and transmit each different new parameter set to a different sub-set of units of the multi-unit system.

11. A method for machine learning in a multi-unit system, the method comprising: receiving at a central unit a plurality of parameter sets generated by a plurality of units, each parameter set generated by a different unit; calculating a new parameter set based on the plurality of parameter sets; and

transmitting the new parameter set to at least one of the plurality of units, to update a machine learning process at the at least one of the plurality of units.

12. The method of claim 11 wherein the plurality of parameter sets are generated at the plurality of units by running a local training process using a local database of training examples.

13. The method of claim 12 wherein the training examples comprise true images which comprise an occupant and false images which do not comprise an occupant.

14. The method of claim 12 wherein the training examples are generated based on output from a tracking module.

15. The method of claim 11 comprising:

calculating a plurality of new parameter sets based on the plurality of parameter sets; and

transmitting each of the new parameter sets to a different sub-set of units.

16. The method of claim 11 wherein the machine learning process is to determine occupancy in a space.

17. A method for determining occupancy in a space, the method comprising:

using a processor to label image data from a first sequence of images based on motion detection in the first sequence of images;

using the labeled image data to generate a parameter set for a machine learning process;

using the parameter set to classify images from a later obtained sequence of images; and

generating a determination of occupancy based on the classification.

18. The method of claim 17 wherein using the processor to label image data comprises:

detecting an object in the image data;

tracking the object throughout the image data; and

labeling the image based on the tracking of the object.

19. The method of claim 17 wherein using the processor to label image data comprises:

detecting an object from the image data;

detecting a shape of the object; and

labeling the image data based on the shape of the object.

20. The method of claim 17 wherein using the processor to label image data comprises:

detecting an object in the image data; and

labeling the image data based on a shape and motion of the object.

Description:
MACHINE LEARNING IN A MULTI-UNIT SYSTEM

FIELD

[0001] The present invention relates to the field of machine learning, typically in a multi-unit system. In one embodiment the invention relates to image and scene analysis using machine learning techniques.

BACKGROUND

[0002] Computer vision is used to monitor indoor and outdoor spaces; in some cases to automatically detect, count and monitor human occupants in a space.

[0003] Computer vision using machine learning techniques enables learning from image data and making predictions based on image data. Machine learning algorithms typically operate by iteratively training a model (also known as a "network" or "parameter set") using example inputs (typically manually labeled true and false inputs) in order to make data-driven predictions or decisions. During each iteration, at least a part of the data set is examined, an intermediate calculation result is generated, and this intermediate result is used to create an updated model. Thus, to create an efficient model, a processor running a learning process should be presented with a large and diverse example input data set.

[0004] Creating a large data set for indoor and outdoor settings would require collecting many images from possibly private establishments such as office spaces, which could, in addition to using up transmission bandwidth, encounter legal issues such as privacy issues.

SUMMARY

[0005] Embodiments of the invention provide a method and system for creating large and diverse data sets, for example data sets of images or image data, without having to expend transmission bandwidth and without affecting privacy of imaged occupants.

[0006] In one embodiment a method and system are provided for detecting occupants in images and for otherwise analyzing an imaged scene, using machine learning techniques but without impacting privacy of occupants.

[0007] In embodiments of the invention, data collected by one or more sensor units is used, essentially as a growing distributed database, to improve a machine learning system, however without transmitting any of the collected data, thereby avoiding issues such as transmission bandwidth and privacy.

[0008] Some embodiments of the invention enable training a machine learning process by using image based information collected from one or more units, without transmitting any images or visual information, thereby reducing bandwidth utilization and, by preventing access to visual information, avoiding privacy issues.

[0009] A system according to one embodiment of the invention includes at least one sensor unit and a central unit, e.g., a computing unit, in communication with the sensor unit. A local database in (or available to) the sensor unit is used to calculate an intermediate result which is transmitted to the computing unit, and the computing unit calculates a new, typically improved or updated, parameter set based on the intermediate result. In some embodiments, a parameter set is generated locally at one or more sensor units by running a local training process at each sensor unit. The one or more locally generated parameter sets are then transmitted to the computing unit and the computing unit calculates a new, typically improved or updated, parameter set based on the locally generated one or more parameter sets.
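The round-trip described in this paragraph can be sketched in a few lines of code. The snippet below is an illustration only, not the disclosed implementation: it assumes a toy logistic model, NumPy weight vectors, and hypothetical helper names (local_train, combine); the key point it shows is that only parameter sets, never training examples, leave a unit.

```python
import numpy as np

def local_train(weights, local_examples, lr=0.1):
    """Local training process at a sensor unit; only the resulting
    parameter set, never the local training data, leaves the unit."""
    for x, label in local_examples:
        pred = 1.0 / (1.0 + np.exp(-weights @ x))      # toy logistic model
        weights = weights - lr * (pred - label) * x    # one gradient step
    return weights

def combine(parameter_sets):
    """Central (computing) unit: combine locally generated parameter
    sets into a new parameter set, here by simple averaging."""
    return np.mean(parameter_sets, axis=0)

# One round: each unit trains on its own private examples...
unit_databases = [
    [(np.array([0.2, 1.0]), 1.0), (np.array([0.9, 0.1]), 0.0)],
    [(np.array([0.1, 0.8]), 1.0), (np.array([0.7, 0.3]), 0.0)],
]
current = np.zeros(2)
local_sets = [local_train(current.copy(), db) for db in unit_databases]
new_parameter_set = combine(local_sets)  # transmitted back to the units
```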

[0010] The new parameter set, which may be transmitted back to the one (or more) sensor units, can then be used by a processor of the sensor unit or other units, for example, to detect an occupant in images of a space or otherwise analyze an imaged scene in new images.

[0011] Both the intermediate result and the locally generated parameter set are generated based on data collected by a sensor unit, e.g., image data collected by an image sensor; however, they do not contain the collected data (e.g., contain no visual information) and cannot be used to reconstruct the data (e.g., images).

[0012] By sharing parameter sets and intermediate results, but not actual data (e.g., visual information), a multi-unit system according to embodiments of the invention enables access to a large database of information collected by different units of the system. Since no collected data is being transferred between units, privacy of occupants, for example in a monitored space, can be maintained. In addition, transmitting parameter sets and intermediate results, but not collected data, according to embodiments of the invention, enables access to a large database of information collected by different units of the system without expending bandwidth due to transmission of large volumes of data.

[0013] In another embodiment a parameter set may be improved and updated internally within a sensor unit, based on collected data (e.g., image or visual data) but without transmitting the collected data outside of the sensor unit.

BRIEF DESCRIPTION OF THE FIGURES

[0014] The invention will now be described in relation to certain examples and embodiments with reference to the following illustrative figures so that it may be more fully understood. In the drawings:

[0015] Fig. 1 is a schematic illustration of a system operable according to embodiments of the invention;

[0016] Fig. 2 is a schematic illustration of components of a system and method, according to an embodiment of the invention;

[0017] Fig. 3 is a schematic illustration of a multi-unit system, according to an embodiment of the invention;

[0018] Fig. 4 schematically illustrates a method for machine learning in a multi-unit system, according to embodiments of the invention;

[0019] Fig. 5 schematically illustrates a method for training a sensor unit, according to embodiments of the invention; and

[0020] Fig. 6 is a schematic illustration of a sensor unit operable according to embodiments of the invention.

DETAILED DESCRIPTION

[0021] Embodiments of the invention provide a method and system for creating and using large and diverse data sets without transmitting collected data. In one embodiment there is provided a method and system for image analysis using machine learning techniques, while preventing access to collected images and maintaining privacy of imaged scenes (e.g., scenes including locations and/or people).

[0022] According to embodiments of the invention, image based information, which does not, however, contain any visual information, is used to update and improve units of a multi-unit system.

[0023] The terms "visual information" or "image data" refer to, inter alia, data such as values that represent the intensity of reflected light, as well as partial or full images or videos, or data that can be used to reconstruct an image.

[0024] One application, which will be exemplified below, is the use of image analysis or imaged scene analysis for occupancy detection; however, other applications may be used according to embodiments of the invention.

[0025] As used herein, "determining occupancy" or "detecting occupancy" may include detecting an occupant and/or monitoring one or more occupants throughout an imaged space, e.g., counting occupants, tracking occupants, determining occupants' location in a space, etc.

[0026] "Occupant" may refer to any type of occupant, such as a human and/or animal and/or inanimate object.

[0027] In the following description, various aspects of the present invention will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well known features may be omitted or simplified in order not to obscure the present invention.

[0028] Although the following description describes mainly embodiments using computer vision, embodiments of the invention are not limited to the field of image analysis or computer vision, and may be applied to other fields.

[0029] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing," "computing," "calculating," "determining," "detecting," "identifying," or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.

[0030] In one embodiment a system is provided which includes a first unit, e.g., a computing unit, in communication with a second unit, e.g., a sensor unit which includes a processing unit running a machine learning process. The computing unit receives an intermediate result from the sensor unit. In one embodiment the intermediate result is generated by using a local database at the sensor unit. For example, an intermediate result may be generated by a calculation process at the sensor unit.

[0031] In another embodiment a multi-unit system includes a plurality of units, each unit capable of generating a parameter set by running a local training process using a local database of training examples. The system also includes a computing unit to receive the parameter sets generated by the units and to combine the parameter sets to generate a new parameter set. The computing unit may then transmit the new parameter set to one or more units of the multi-unit system.

[0032] In one embodiment data, such as image-based, audio-based or other data, is collected from one or more units to a local database.

[0033] In one embodiment the local database includes true image examples of a space which include an occupant and false image examples of the space which do not include an occupant. Other true and false image examples may be used according to embodiments of the invention.

[0034] In one embodiment in a multi-unit system which includes a computing unit and one or more sensor units, the computing unit calculates an updated parameter set based on an intermediate result and/or locally generated parameter set which was sent to the computing unit from one or more of the sensor units. The computing unit then transmits the updated parameter set to one or more of the sensor units. The sensor unit(s) uses the updated parameter set in a machine learning process, for example, to detect an occupant in images of a space and/or to determine occupancy in an image of the space.

[0035] According to one embodiment a local database of training examples is accumulated automatically in the sensor unit. The local database includes true examples, in one embodiment images or image parts which can be identified with high probability to contain an occupant, and false examples, namely images or image parts identified with high probability not to contain occupants. The local database can be created by utilizing computer vision algorithms, by using data from other sensors, or by other means, either manual or automatic.

[0036] An example of a system operable according to embodiments of the invention is schematically illustrated in Fig. 1.

[0037] In one embodiment the system 100 is a multi-unit system including at least one sensor unit 103 and at least one computing unit 105 which is in communication with the sensor unit 103. Sensor unit 103 typically collects data (e.g., image data or audio data) from a monitored space. In one embodiment the sensor unit 103 includes or is in communication with an image sensor 113 for obtaining images of a space such as a room 104 or portion of the room 104. Other indoor or outdoor spaces may be monitored according to embodiments of the invention.

[0038] Sensor unit 103 typically includes an interface 111 for wired or wireless communication with computing unit 105 and with other additional sensor units and/or other additional computing units (not shown). Computing unit 105 typically includes an interface 111' to enable communication with sensor unit 103 and other sensor units and/or computing units.

[0039] Communication between units of the system 100 may be through a wired connection (e.g., interfaces 111 and 111' may include a USB or Ethernet port) or wireless link, such as through infrared (IR) communication, radio transmission, Bluetooth technology, ZigBee, Z-Wave and other suitable communication routes.

[0040] In one embodiment the image sensor 113 is associated with a processor 102 and a memory 12, which may be part of the sensor unit 103. Processor 102 runs algorithms and processes for image analysis, e.g., to detect an occupant and determine occupancy in the space (e.g., room 104) based on input from image sensor 113.

[0041] Processor 102 may include, for example, one or more processors and may be a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multipurpose or specific processor or controller.

[0042] Memory unit(s) 12 may include, for example, a random access memory (RAM), a dynamic RAM (DRAM), a flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.

[0043] The processor 102 may run a machine learning process to detect an occupant in images of a space. An occupant may be detected based on properties or characteristics such as motion characteristics, shape, color and other properties or a combination of properties, e.g., based on a combination of motion characteristics and shape. Typically, an occupant exhibits human characteristics, as further detailed below.

[0044] A machine learning process according to one embodiment of the invention may run a set of algorithms that use multiple processing layers on an image to identify desired image features (image features may include any information obtainable from an image, e.g., the existence of objects or parts of objects, their location, their type and more). Each processing layer receives input from the layer below and produces output that is given to the layer above, until the highest layer produces the desired image features. Activity of the different processing layers is typically ruled by a parameter set (which may include sets of adaptive weights, e.g., numerical parameters which are typically tuned by a learning algorithm).
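As a concrete, deliberately tiny picture of what "ruled by a parameter set" can mean, the sketch below treats the parameter set as a list of per-layer weights and biases; the two-layer shape, dimensions, and ReLU activation are illustrative assumptions, not part of this disclosure.

```python
import numpy as np

def extract_features(image_vector, parameter_set):
    """Each processing layer receives input from the layer below and
    produces output for the layer above; the highest layer yields the
    desired image features. Activity is ruled by the parameter set."""
    activation = image_vector
    for weights, bias in parameter_set:      # one (W, b) pair per layer
        activation = np.maximum(0.0, weights @ activation + bias)  # ReLU
    return activation

rng = np.random.default_rng(0)
parameter_set = [
    (rng.normal(size=(16, 64)), np.zeros(16)),  # adaptive weights, layer 1
    (rng.normal(size=(4, 16)), np.zeros(4)),    # adaptive weights, layer 2
]
features = extract_features(rng.normal(size=64), parameter_set)
```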

[0045] Based on identification of the desired image features, a shape (or other property) of an object may be determined, enabling the system to detect an occupant based on shape (or motion or color, etc.).

[0046] In one embodiment the image sensor 113 is configured to obtain a top view of a space. For example, image sensor 113 may be located on a ceiling of room 104, typically parallel to the floor of the room, to obtain a top view of the room or of part of the room 104. In one embodiment processor 102 generates a parameter set via a training process using a local database of training examples. Processor 102 may transmit the parameter set or an intermediate result from the parameter set to computing unit 105. Computing unit 105 may then calculate a new parameter set based on the parameter set transmitted by processor 102 (and possibly based on parameter sets transmitted by processors from additional units in the system) and/or based on the intermediate result(s). The computing unit 105 may then transmit the new parameter set back to sensor unit 103 and/or to another unit in the multi-unit system.

[0047] The system 100 may include a plurality of image sensors 113, each to obtain images of the space 104 (typically each image sensor obtains images of different parts of the space), each image sensor associated with a different sensor unit 103. In one embodiment each sensor unit 103 uses the new parameter set transmitted by computing unit 105, to detect an occupant in an image of the space.

[0048] Intermediate results and parameter sets and other signals may be transmitted between units of the system 100, however, no collected data is transmitted between units of the system 100. In one embodiment transmission of images or of any visual information to or from the sensor unit 103 is not enabled. For example, interface 111 includes no application program interface (API) to enable downloading of image data.

[0049] In one embodiment an architecture is installed and maintained at sensor unit 103 and the processor 102 applies a specific parameter set to the architecture maintained at sensor unit 103.

[0050] In one embodiment, which is exemplified in Fig. 2, a system includes a first sensor unit 203 which is in communication with a second unit 205, possibly via a computing unit. First sensor unit 203 may transmit image-based information (but not visual information) to the second unit 205, the information being used to update units of the system. Each update can trigger a gradient descent on the image data input to a learning process at the sensor units. Unit 203 may send to unit 205 information such as: timestamp, network ID, gradient, image data batch statistics, etc. In some embodiments unit 205 may automatically run validation and other processes or may be triggered by an operator to run validation and other operations.
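The update fields listed above (timestamp, network ID, gradient, batch statistics) could be packaged roughly as in the record below. This layout is a hypothetical sketch for illustration, not a format defined by the disclosure; note that it carries a gradient and statistics but no images or other visual information.

```python
from dataclasses import dataclass, field
from typing import Dict, List
import time

@dataclass
class UpdateMessage:
    """Image-based information shared between units: a gradient and
    batch statistics, but no images and no other visual information."""
    timestamp: float
    network_id: str
    gradient: List[float]                 # from a local descent step
    batch_statistics: Dict[str, float] = field(default_factory=dict)

msg = UpdateMessage(
    timestamp=time.time(),
    network_id="unit-203",
    gradient=[0.01, -0.03, 0.002],
    batch_statistics={"batch_size": 1000, "true_fraction": 0.4},
)
```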

[0051] First sensor unit 203 includes a processing unit 202 in communication with an image sensor (e.g., image sensor 113). Images obtained from the image sensor can be saved in local image database 201 (maintained, for example, in memory 12) and may include true image data 206 (which may include, for example, full images or portions of images) and false image data 207 (which may include full images or portions of images).

[0052] In one embodiment the true and false image data, 206 and 207 (which may be full images or portions of images), are used in a process to generate a local parameter set and/or an intermediate result of the local parameter set. For example, true and false image data 206 and 207 are input to a training process 208 and a local parameter set or an intermediate calculation result 210 is generated from the training process 208. In one example, batches of 1,000 images (or portions of images) are used in training process 208.

[0053] The local parameter set or intermediate calculation result 210 may be transmitted to a second unit of the system, e.g., to second unit 205.

[0054] In one embodiment true and false image data 206 and 207 are generated based on probability scores. For example, images or parts of images that have a probability above a threshold of being true may be saved in a local true database whereas the other images are saved in a local false database.

[0055] In one embodiment a local image database 201 of training examples includes image data of a space. The database 201 may include true image data 206, which may include images (or portions of images) that include an occupant. For example, true image data may include the portion of an image which depicts the occupant or part of the occupant. In one embodiment an occupant is an object exhibiting human characteristics, for example, human motion characteristics (e.g., non-repetitive motion, movements within a predetermined size range, etc.). In another embodiment an occupant is an object having predetermined shape characteristics, for example, a shape of a human, and in some embodiments a top view shape of a human. In some embodiments an occupant is an object having a predetermined shape and predetermined motion characteristics.

[0056] In one example, true image data 206 and/or false image data 207 (both of which may include full images or parts of images) are generated by taking a snapshot of a tracker (which may be, for example, part of processor 102) that tracks objects in images obtained by image sensor 113. In one example, an image may be labeled "true" if the tracked object exists in the images for over 30 seconds and has human motion characteristics. In this case, images obtained within this time frame (e.g., 30 seconds) have a probability above the threshold of being true. In another example an image may be labeled "false" if, in retrospect of, for example, 1 hour, the tracked object does not have human motion characteristics and/or if the imaged scene includes no motion for a long period of time (e.g., 30 minutes). In this case, images obtained within this time frame (e.g., 1 hour or 30 minutes) have a probability above the threshold of being false. Thus, in one embodiment training examples are generated based on output from a tracking module.
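The timing rules in this example translate naturally into code. In the sketch below, the Track fields and the label_snapshot helper are hypothetical; only the 30-second, 1-hour, and 30-minute thresholds come from the examples above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    duration_s: float      # how long the tracked object has existed
    human_motion: bool     # does it show human motion characteristics?
    still_time_s: float    # how long the scene has shown no motion

def label_snapshot(track: Track) -> Optional[str]:
    """Label a tracker snapshot: "true" after 30 s of human-like motion;
    "false" in retrospect (e.g., 1 hour) if the motion is not human-like,
    or if the scene has been still for a long period (e.g., 30 minutes)."""
    if track.duration_s > 30 and track.human_motion:
        return "true"
    if track.duration_s > 3600 and not track.human_motion:
        return "false"
    if track.still_time_s > 30 * 60:
        return "false"
    return None  # not confident enough either way; do not save

print(label_snapshot(Track(duration_s=45, human_motion=True, still_time_s=0)))
```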

[0057] In one embodiment images from local image database 201 may, in addition or in parallel to being input to training process 208, go through a classification process run by classifier 212 at sensor unit 203. The classifier 212 processes the images, e.g., by running them through a machine learning process, and generates output 213. The output 213 may be used, for example, to detect an occupant or determine occupancy in the space from the images. For example, output 213 may include a signal transmitted to a remote device such as an alarm or a device to display information relating to the determination of occupancy.

[0058] Processes carried out by the system (such as processes described below) are run according to an architecture 215, which may be installed in each of the system units or may be shared by the units; for example, the architecture 215 may be installed in one unit and transmitted from one unit to another unit via a central computing unit.

[0059] As described above, in one embodiment the local parameter set or intermediate calculation result 210 obtained through training process 208 at sensor unit 203, is transmitted, possibly via a central unit, to second unit 205.

[0060] The local parameter set or intermediate calculation result 210 generated at a first unit 203 can then be used by networks of different units of the system. For example, the local parameter set or intermediate calculation result 210 can be used by training process 218 at second unit 205 (or at another sensor unit of the system), together with images from an image database 211 maintained at the second unit 205 (or at another sensor unit of the system).

[0061] The training process 218, run on architecture 215 at second unit 205, may generate a new, improved or updated parameter set 220. The new parameter set 220 is improved or updated because it is generated based on a local parameter set or intermediate calculation result 210 which was generated at a different sensor unit (e.g., unit 203) but used image data that was available only to the second unit 205 and was not available to the unit 203.

[0062] In some embodiments new parameter set 220 can be calculated offline at a typically central computing unit, based on parameter sets and/or intermediate results input from a plurality of sensor units. A new, typically updated, parameter set can be calculated at a computing unit by combining (e.g., by averaging) several intermediate results and/or parameter sets.
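Combining by averaging, as mentioned here, can be as small as the sketch below: an element-wise mean over equally weighted units, where each unit's parameter set is a list of arrays (e.g., one per layer). Any weighting scheme beyond the plain mean would be an assumption not stated in the text.

```python
import numpy as np

def average_parameter_sets(parameter_sets):
    """Central unit: element-wise average of parameter sets and/or
    intermediate results received from several sensor units."""
    return [np.mean(layer_group, axis=0)
            for layer_group in zip(*parameter_sets)]

unit_a = [np.array([1.0, 2.0]), np.array([0.5])]
unit_b = [np.array([3.0, 4.0]), np.array([1.5])]
new_set = average_parameter_sets([unit_a, unit_b])
# -> [array([2., 3.]), array([1.])]
```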

[0063] New parameter set 220 may be transmitted to sensor unit 203 and/or to other units where the new parameter set 220 is used with machine learning processes.

[0064] Local parameter sets or intermediate calculation results 210 and new parameter set 220 are generated based on image data (e.g., images from local database 201); however, they contain no visual information and cannot be used to reconstruct images. Since intermediate results and/or parameter sets, but no visual information, are transferred between units of the system, access to the visual information is prevented, thus maintaining privacy of imaged occupants.

[0065] Images of a space collected at each sensor unit are processed within the sensor unit (e.g., by processing unit 202 and/or by classifier 212) and are not transmitted out of the sensor unit. Thus, image data or any visual information obtained by a sensor unit of systems according to embodiments of the invention is not accessible.

[0066] By sharing intermediate results and/or parameter sets, but not visual information, the methods and systems according to embodiments of the invention enable access to a large database of information collected by several different units (e.g., first unit 203 and second unit 205) but since no visual information is accessible, privacy and/or other rights of imaged occupants are not violated.

[0067] In the example described in Fig. 2 a parameter set (e.g., new parameter set 220) is calculated by using a training process (e.g., training process 218) at the second unit 205; however, other methods of calculating a parameter set, based on one or more local parameter sets or intermediate results, may be used according to embodiments of the invention.

[0068] In one embodiment calculation of a parameter set (e.g., at a central computing unit) includes using inputs from a plurality of sensor units. For example, a new parameter set 220 may be calculated based on the local parameter set or intermediate calculation results 210 sent from sensor unit 203 and based on additional parameter sets and/or intermediate results sent from additional sensor units in the system.

[0069] In some embodiments a new parameter set is transmitted to a sub-set of units of the multi-unit system. For example, and as further detailed in Fig. 3 below, a central computing unit can generate a plurality of differing new parameter sets, and transmit each different new parameter set to a different sub-set of units of the multi-unit system, possibly based on predetermined criteria or based on criteria determined in real-time.

[0070] In one embodiment, which is schematically illustrated in Fig. 3, a multi-unit system 300 includes a plurality of units 301, 302 and 303 or sub-sets of units. Unit 301 can receive a local parameter set or intermediate result A from unit 302 and local parameter set or intermediate result B from unit 303 and may calculate a different parameter set C based on the plurality of (possibly different) local parameter sets or intermediate results (A and B) received.

[0071] In one embodiment each of the local parameter sets or intermediate results A and B is generated (e.g., by a training process) using different input images. For example, unit 302 may be in communication with camera 322 which obtains images of room 312 whereas unit 303 is in communication with camera 323 which obtains images of room 313. In this example unit 302 uses images of room 312 as input in a training process to generate local parameter set or intermediate result A whereas unit 303 uses images of room 313 as input in a learning process to generate local parameter set or intermediate result B. Thus, local parameter sets or intermediate results A and B will typically be different.

[0072] In one embodiment unit 301 calculates several different parameter sets. For example, based on local parameter sets or intermediate results A and B (or even just based on one of A or B), unit 301 may calculate parameter sets C and D. Unit 301 may transmit each of the different parameter sets to a different unit in the system. For example, based on local parameter sets or intermediate results input to unit 301, a processor in unit 301 may calculate parameter set C, which is more suitable for unit 302, and parameter set D, which is more suitable for unit 303. Thus, unit 301 may transmit parameter set C to unit 302 to update unit 302 and parameter set D to unit 303 to update unit 303.

[0073] The decision of which parameter set is suitable for which unit or sub-set of units may be based on predetermined criteria, such as geographical or other location of the sub-set, or based on criteria developed in real-time, for example, based on content of images obtained at a sub-set of units.
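The routing decision could be expressed, for example, as a simple criteria-to-sub-set mapping. The helper below is a hypothetical sketch; the location-based criterion is just the example given in the preceding paragraph.

```python
def route_parameter_sets(parameter_sets_by_criteria, units):
    """Transmit each different new parameter set to the sub-set of
    units matching its criterion (e.g., geographical location)."""
    return {unit_id: parameter_sets_by_criteria.get(location)
            for unit_id, location in units.items()}

units = {"unit-302": "room-312", "unit-303": "room-313"}
sets_by_criteria = {"room-312": "parameter set C", "room-313": "parameter set D"}
print(route_parameter_sets(sets_by_criteria, units))
```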

[0074] A method for machine learning in a multi-unit system is schematically illustrated in Fig. 4. In one embodiment the method includes receiving a locally generated parameter set or an intermediate result (402), calculating a new parameter set based on the locally generated parameter set or intermediate result (404) and transmitting the new parameter set (406).

[0075] In one embodiment the locally generated parameter set or intermediate result is received in a first unit of a multi-unit system and the new parameter set, which is calculated by a processor of the first unit, is transmitted from the first unit to a second unit of the system to update the machine learning process and/or classification process at the second unit.

[0076] In one embodiment the method includes receiving a plurality of locally generated parameter sets or intermediate results (each locally generated parameter set or intermediate result generated by using different training examples) and calculating a single new parameter set based on the plurality of locally generated parameter sets or intermediate results.

[0077] In yet another embodiment the method includes calculating a plurality of (different) parameter sets based on one or more locally generated parameter sets or intermediate results. In this embodiment each of the calculated parameter sets can be transmitted to a different unit or different sub-set of units of the system.

[0078] In one embodiment calculating a new parameter set includes running a training process using a received intermediate result. In another embodiment calculating a new parameter set includes combining (e.g., by calculating an average) locally generated parameter sets. Other or additional mathematical functions may be used to calculate the new parameter set.

[0079] In one embodiment, the method is for image-based machine learning in a multi-unit system. In this embodiment the intermediate result may be generated by using input images which include true images that include an occupant and false images that do not include an occupant.

[0080] In one embodiment of the invention a database of training input is generated by a processor using image analysis processes for occupancy detection.

[0081] In one example, which is schematically illustrated in Fig. 5, a processor is used to label an image (or a portion of an image) from a sequence of images as "true" or "false". These automatically labeled images (or portions) may then be used (e.g., in a training process) in a sensor to detect occupancy. For example, a training process may generate a parameter set to be used by a machine learning process at the sensor and the sensor may thus use the parameter set to classify new images from a new sequence of images and a determination of occupancy may be generated based on this classification.

[0082] In the embodiment described above a database of training input is generated automatically (as opposed to manually). In one embodiment a method for determining occupancy in a space may include using a processor to label image data from a first sequence of images based on motion detection in the first sequence of images. The labeled image data may then be used to generate a parameter set for a machine learning process. The machine learning process or parameter set may be used to classify images from a second sequence of images and a determination of occupancy may be generated based on the classification.
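As an illustrative outline of this label-train-classify flow only: in the sketch below the motion test is a naive frame difference and the "training" is a crude mean-difference template, both stand-ins (with placeholder names) for whatever motion detection and learning process the unit actually runs.

```python
import numpy as np

def label_by_motion(frames, threshold=10.0):
    """Label image data from a first sequence based on motion detection:
    frames with a large inter-frame difference are labeled 1 ('true')."""
    labels = []
    for prev, curr in zip(frames, frames[1:]):
        motion = np.abs(curr.astype(float) - prev.astype(float)).mean()
        labels.append(1 if motion > threshold else 0)
    return frames[1:], labels

def train_parameter_set(frames, labels):
    """Generate a (toy) parameter set from the labeled image data: the
    mean difference between 'true' and 'false' frames."""
    true_mean = np.mean([f for f, y in zip(frames, labels) if y == 1], axis=0)
    false_mean = np.mean([f for f, y in zip(frames, labels) if y == 0], axis=0)
    return true_mean - false_mean

def classify(frame, parameter_set):
    """Classify an image from a later obtained sequence; a positive
    projection onto the parameter set is taken as 'occupied'."""
    return float((frame * parameter_set).sum()) > 0.0

frames = [np.zeros((4, 4)), np.zeros((4, 4)),
          np.full((4, 4), 20.0), np.full((4, 4), 20.0)]
labeled_frames, labels = label_by_motion(frames)
params = train_parameter_set(labeled_frames, labels)
print(classify(np.full((4, 4), 20.0), params))  # occupancy determination
```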

[0083] In the example schematically illustrated in Fig. 5, an image analysis process is applied to one or more images from a sequence of images of a space (502) to determine if an occupant or part of an occupant is depicted in a first image (or portion of the image) from the sequence of images. If the first image (or portion of image) includes an occupant (or part of an occupant) (504) then the first image (or portion of image) is labeled "true" (506). If the first image (or portion of image) does not include an occupant (or part of occupant) then the image is labeled "false" (508). Each labeled image (or portion of image) may then be saved in an appropriate database to be used in a machine learning training process. This automatic labeling process may be repeated for a second (and for additional) image from the sequence of images.

[0084] True and false images may be determined by having a probability above a threshold, e.g., as described above.

[0085] An image analysis process for occupancy detection may include motion detection and/or shape detection and/or other image analysis techniques.

[0086] For example, if an object suspected to be an occupant is detected in a first image from a sequence of images, the object may be tracked throughout later images of the sequence and the first image can be labeled based on the tracking (e.g., if the tracking revealed motion typical of a human, the first image may then be labeled "true"). In another example, an image may be labeled by applying a shape detection algorithm on the image (or several images) (e.g., if an image (or portion of an image) includes an object having a shape of a human, then that image may be labeled "true"). In other examples, an image may be labeled by applying a combination of algorithms, e.g., shape detection and motion detection algorithms.

[0087] In one embodiment the automatically labeled images (or portions of images) may be input to a training process, which may generate a local parameter set or an intermediate result for a machine learning process to use in order to classify images from a second (typically later) sequence of images to determine occupancy in a space based on the second sequence of images.

[0088] In one embodiment, which is schematically illustrated in Fig. 6, a device 600 (which may be, for example, a stand-alone unit or a unit in a multi-unit system) may include a processor 602 to label an image (or portion of an image) from a sequence of images of a space and to self-train by using a first sequence of images to improve classification of a second sequence of images, without having to transmit or receive images from an external source.

[0089] In one embodiment the device 600 includes processor 602 and image sensor 603 which is in communication with the processor 602. In another embodiment the image sensor 603 may be remote and not necessarily part of the device 600.

[0090] In one embodiment image sensor 603 captures a first sequence of images which may be kept in a first local image database 661. The images from database 661 are processed by processor 602 such that images (or portions of the images) are automatically labeled (612) (e.g., true images contain an occupant whereas false images do not contain an occupant; true images contain a predetermined shape (e.g., a shape of a standing or sitting occupant) whereas false images do not contain the predetermined shape, etc.). The labeled images are then input to a process, e.g., a training process (613) run by processor 602, and a first parameter set or an intermediate result 614 is generated based on the training process (613). The first parameter set or intermediate result 614 is then used in a machine learning process, for example, as described above, for classifying images from a second sequence of images. For example, the second sequence of images may include images obtained from image sensor 603 at a later time than the first sequence of images, which may be kept in a second image database 662. In one embodiment images or parts of images from database 662 are classified (615) to determine occupancy (616). Based on the determination of occupancy (616) an occupancy signal may be output. The output 617 may be a signal such as an audio or visual signal to alert an operator. Alternatively or in addition, the output 617 may be a signal transmitted to a remote device such as an alarm or a device to display information relating to the determination of occupancy (616). In other embodiments the output 617 may be a signal to operate or modulate an HVAC (heating, ventilation and air conditioning) device or other environment comfort devices, based on the determination of occupancy (616).
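Pulling the numbered blocks of Fig. 6 together, a self-training unit could be organized along the lines below. This is a structural sketch only, with placeholder names; the label, train, and classify callables stand in for the processes described above (for example, the ones sketched after paragraph [0082]).

```python
class SelfTrainingSensorUnit:
    """Labels its own first image sequence, trains locally, then uses
    the resulting parameter set to classify a later sequence; no images
    ever leave the unit, only occupancy signals do."""

    def __init__(self, label_fn, train_fn, classify_fn):
        self.label_fn = label_fn        # e.g., motion/shape labeling (612)
        self.train_fn = train_fn        # training process (613)
        self.classify_fn = classify_fn  # classifier (615)
        self.parameter_set = None

    def self_train(self, first_sequence):
        frames, labels = self.label_fn(first_sequence)      # step 612
        self.parameter_set = self.train_fn(frames, labels)  # steps 613/614

    def occupancy_signal(self, second_sequence):
        occupied = any(self.classify_fn(f, self.parameter_set)
                       for f in second_sequence)            # steps 615/616
        return "occupied" if occupied else "vacant"         # output 617, e.g., to HVAC
```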

[0091] Thus, according to embodiments of the invention, images and/or visual information are processed within units (stand-alone units or units that are part of a multi-unit system) and may be used to improve image analysis (e.g., by running a training process at a learning machine). However, the images and/or visual information are not transmitted out of the unit, thereby avoiding violating rights related to the images and reducing transmission bandwidth utilization.

[0092] By sharing parameter sets and intermediate results, but not collected data (e.g., visual information), the stand-alone units and/or multi-unit system according to embodiments of the invention enable access to a large database of information collected over time or from different units of the system while maintaining privacy of occupants in a monitored space.