Title:
SYSTEM AND METHOD FOR REMOTE MULTIMODAL SENSING OF A MEASUREMENT VOLUME
Document Type and Number:
WIPO Patent Application WO/2022/168091
Kind Code:
A1
Abstract:
A method of remote multimodal sensing of a measurement volume, including: detecting, by at least one optical detector, light from a measurement volume; generating, by the at least one optical detector, an optical output dataset based on at least a portion of the detected light; receiving, by at least one electromagnetic (EM) receiver, one or more EM signals from the measurement volume; generating, by the at least one EM receiver, an EM output dataset based on at least one of the one or more received EM signals; and determining, by a processing unit, a multiparameter output dataset based on at least a portion of the optical output dataset and at least a portion of the EM output dataset.

Inventors:
GABAI HANIEL (IL)
ALONI SHARONE (IL)
COHEN ADIV BINYAMIN (IL)
AVRAHAM JONATHAN (IL)
OPHIR NIMROD (IL)
KEMPLER ITAMAR ARIE (IL)
SIROTA VADIM (IL)
Application Number:
PCT/IL2022/050145
Publication Date:
August 11, 2022
Filing Date:
February 03, 2022
Assignee:
ELBIT SYSTEMS C4I AND CYBER LTD (IL)
International Classes:
G06V10/80; A61B5/00; A61B5/02; A61B5/08; A61B5/117; G01S13/86; G06K9/62
Domestic Patent References:
WO2019198076A12019-10-17
Foreign References:
US20030186663A12003-10-02
US9097800B12015-08-04
US10663577B12020-05-26
US20200134395A12020-04-30
US20200309894A12020-10-01
CN111166342A2020-05-19
Other References:
RONG YU; GUTIERREZ RICHARD; MISHRA KUMAR VIJAY; BLISS DANIEL W.: "Noncontact Vital Sign Detection With UAV-Borne Radars: An Overview of Recent Advances", IEEE VEHICULAR TECHNOLOGY MAGAZINE, IEEE,, US, vol. 16, no. 3, 9 July 2021 (2021-07-09), US , pages 118 - 128, XP011872558, ISSN: 1556-6072, DOI: 10.1109/MVT.2021.3086442
VAKIL ASAD; LIU JENNY; ZULCH PETER; BLASCH ERIK; EWING ROBERT; LI JIA: "Feature Level Sensor Fusion for Passive RF and EO Information Integration", 2020 IEEE AEROSPACE CONFERENCE, IEEE, 7 March 2020 (2020-03-07), pages 1 - 9, XP033812535, DOI: 10.1109/AERO47225.2020.9172254
REGEV NIR, WULICH DOV: "Multi-Modal, Remote Breathing Monitor", SENSORS, vol. 20, no. 4, pages 1229, XP055955755, DOI: 10.3390/s20041229
Attorney, Agent or Firm:
WEILER, Assaf et al. (IL)
Claims:
CLAIMS

1. A method of remote multimodal sensing of a measurement volume, the method comprising: detecting, by at least one optical detector, light from a measurement volume; generating, by the at least one optical detector, an optical output dataset based on at least a portion of the detected light; receiving, by at least one electromagnetic (EM) receiver, one or more EM signals from the measurement volume; generating, by the at least one EM receiver, an EM output dataset based on at least one of the one or more received EM signals; and determining, by a processing unit, a multiparameter output dataset based on at least a portion of the optical output dataset and at least a portion of the EM output dataset.

2. The method of claim 1, comprising illuminating at least a portion of the measurement volume with one or more light beams from at least one light source.

3. The method of any one of claims 1-2, comprising transmitting, by at least one EM transmitter, one or more transmitted EM signals into at least a portion of the measurement volume.

4. The method of any one of claims 1-3, comprising determining the multiparameter output dataset by fusing at least a portion of the optical output dataset and at least a portion of the EM output dataset.

5. The method of any one of claims 1-4, comprising determining, by the processing unit, a contribution of each of the optical output dataset and the EM output dataset to the multiparameter output dataset based on at least one of: a distance of the at least one optical detector and the at least one EM receiver to the measurement volume; environmental conditions; a line of sight (LOS) of the at least one optical detector to the measurement volume; and a signal-to-noise ratio (SNR) of the optical output dataset and the EM output dataset.

6. The method of any one of claims 1-5, comprising determining, by the processing unit, one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

7. The method of any one of claims 1-6, comprising classifying, by the processing unit, one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

8. The method of any one of claims 6 and 7, wherein the one or more activities comprise at least one of: one or more human-induced activities, one or more machine-induced activities, and one or more environmental activities.

9. The method of any one of claims 1-8, comprising detecting, by the processing unit, one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

10. The method of any one of claims 1-9, comprising detecting, by the processing unit, a number of subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

11. The method of any one of claims 1-10, comprising identifying, by the processing unit, at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

12. The method of any one of claims 1-11, comprising detecting, by the processing unit, one or more biometrically identifiable activities of at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

13. The method of claim 12, comprising identifying, by the processing unit, at least one subject of the one or more subjects in the measurement volume based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject.

14. The method of any one of claims 12-13, wherein the one or more biometrically identifiable activities comprise at least one of voice activity, respiratory activity and cardiac activity.

15. A system for remote multimodal sensing of a measurement volume, the system comprising: an optical sensing unit comprising at least one optical detector configured to: detect light from a measurement volume; and generate an optical output dataset based on at least a portion of the detected light; an electromagnetic (EM) sensing unit comprising at least one EM receiver configured to: receive one or more EM signals from the measurement volume; and generate an EM output dataset based on at least one of the one or more received EM signals; and a processing unit configured to determine a multiparameter output dataset based on at least a portion of the optical output dataset and at least a portion of the EM output dataset.

16. The system of claim 15, wherein the optical sensing unit comprises at least one light source configured to illuminate at least a portion of the measurement volume with one or more light beams.

17. The system of any one of claims 15-16, wherein the EM sensing unit comprises at least one EM transmitter configured to transmit one or more transmitted EM signals into at least a portion of the measurement volume.

18. The system of any one of claims 15-17, wherein the processing unit is configured to determine the multiparameter output dataset by fusing at least a portion of the optical output dataset and at least a portion of the EM output dataset.

19. The system of any one of claims 15-18, wherein the processing unit is configured to determine a contribution of each of the optical output dataset and the EM output dataset to the multiparameter output dataset based on at least one of: a distance of the at least one optical detector and the at least one EM receiver to the measurement volume; environmental conditions; a line of sight (LOS) of the at least one optical detector to the measurement volume; and a signal-to-noise ratio (SNR) of the optical output dataset and the EM output dataset.

20. The system of any one of claims 15-19, wherein the processing unit is configured to determine one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

21. The system of any one of claims 15-20, wherein the processing unit is configured to classify one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

22. The system of any one of claims 20 and 21, wherein the one or more activities comprise at least one of: one or more human-induced activities, one or more machine-induced activities, and one or more environmental activities.

23. The system of any one of claims 15-22, wherein the processing unit is configured to detect one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

24. The system of any one of claims 15-23, wherein the processing unit is configured to detect a number of subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

25. The system of any one of claims 15-24, wherein the processing unit is configured to identify at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

26. The system of any one of claims 15-25, wherein the processing unit is configured to detect one or more biometrically identifiable activities of at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

27. The system of claim 26, wherein the processing unit is configured to identify at least one subject of the one or more subjects in the measurement volume based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject.

28. The system of any one of claims 26-27, wherein the one or more biometrically identifiable activities comprise at least one of voice activity, respiratory activity and cardiac activity.

Description:
SYSTEM AND METHOD FOR REMOTE MULTIMODAL SENSING

OF A MEASUREMENT VOLUME

FIELD OF THE INVENTION

The present invention relates to the field of systems and methods for remote sensing of a measurement volume, and more particularly, to multimodal systems thereof.

BACKGROUND OF THE INVENTION

Some current systems utilize image-based technology for remote identification of an object. However, such systems typically require advanced and complex post processing of the images. Moreover, the performance of such systems significantly degrades when the appearance of the object is at least partly masked, e.g., intentionally, and/or due to uncontrolled environmental conditions. Furthermore, the performance of such systems may also depend on a pose of the object with respect to the system.

Some other systems utilize laser-based technology for remote identification of the object. However, such systems require direct line-of-sight (LOS) to the object. Moreover, the performance of such systems strongly depends on the environmental conditions. For example, the performance of such systems may degrade in rainy, foggy, etc. conditions.

Some other systems utilize radiofrequency-based technology for remote identification of the object. Such systems typically do not require direct LOS to the object and are less sensitive to environmental conditions as compared to, for example, systems utilizing laser-based technologies and/or image-based technologies. However, such systems typically have a limited performance range as compared to systems utilizing laser-based technologies and/or image-based technologies. Moreover, extraction of at least some parameters based on radiofrequency-based technologies may be more complex as compared to laser-based technologies and/or image-based technologies.

SUMMARY OF THE INVENTION

Some embodiments of the present invention may provide a method of remote multimodal sensing of a measurement volume, the method may include: detecting, by at least one optical detector, light from a measurement volume; generating, by the at least one optical detector, an optical output dataset based on at least a portion of the detected light; receiving, by at least one electromagnetic (EM) receiver, one or more EM signals from the measurement volume; generating, by the at least one EM receiver, an EM output dataset based on at least one of the one or more received EM signals; and determining, by a processing unit, a multiparameter output dataset based on at least a portion of the optical output dataset and at least a portion of the EM output dataset.

Some embodiments may include illuminating at least a portion of the measurement volume with one or more light beams from at least one light source.

Some embodiments may include transmitting, by at least one EM transmitter, one or more transmitted EM signals into at least a portion of the measurement volume.

Some embodiments may include determining the multiparameter output dataset by fusing at least a portion of the optical output dataset and at least a portion of the EM output dataset.

Some embodiments may include determining, by the processing unit, a contribution of each of the optical output dataset and the EM output dataset to the multiparameter output dataset based on at least one of: a distance of the at least one optical detector and the at least one EM receiver to the measurement volume; environmental conditions; a line of sight (LOS) of the at least one optical detector to the measurement volume; and a signal-to-noise ratio (SNR) of the optical output dataset and the EM output dataset.

Some embodiments may include determining, by the processing unit, one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

Some embodiments may include classifying, by the processing unit, one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

In some embodiments, the one or more activities may include at least one of: one or more human-induced activities, one or more machine-induced activities, and one or more environmental activities.

Some embodiments may include detecting, by the processing unit, one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

Some embodiments may include detecting, by the processing unit, a number of subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

Some embodiments may include identifying, by the processing unit, at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

Some embodiments may include detecting, by the processing unit, one or more biometrically identifiable activities of at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

Some embodiments may include identifying, by the processing unit, at least one subject of the one or more subjects in the measurement volume based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject.

In some embodiments, the one or more biometrically identifiable activities may include at least one of voice activity, respiratory activity and cardiac activity.

Some embodiments of the present invention may provide a system for remote multimodal sensing of a measurement volume, the system may include: an optical sensing unit, which may include at least one optical detector configured to: detect light from a measurement volume; and generate an optical output dataset based on at least a portion of the detected light; an electromagnetic (EM) sensing unit, which may include at least one EM receiver configured to: receive one or more EM signals from the measurement volume; and generate an EM output dataset based on at least one of the one or more received EM signals; and a processing unit configured to determine a multiparameter output dataset based on at least a portion of the optical output dataset and at least a portion of the EM output dataset.

In some embodiments, the optical sensing unit may include at least one light source configured to illuminate at least a portion of the measurement volume with one or more light beams.

In some embodiments, the EM sensing unit may include at least one EM transmitter configured to transmit one or more transmitted EM signals into at least a portion of the measurement volume.

In some embodiments, the processing unit is configured to determine the multiparameter output dataset by fusing at least a portion of the optical output dataset and at least a portion of the EM output dataset.

In some embodiments, the processing unit is configured to determine a contribution of each of the optical output dataset and the EM output dataset to the multiparameter output dataset based on at least one of: a distance of the at least one optical detector and the at least one EM receiver to the measurement volume; environmental conditions; a line of sight (LOS) of the at least one optical detector to the measurement volume; and a signal-to-noise ratio (SNR) of the optical output dataset and the EM output dataset.

In some embodiments, the processing unit is configured to determine one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

In some embodiments, the processing unit is configured to classify one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset.

In some embodiments, the one or more activities may include at least one of: one or more human-induced activities, one or more machine-induced activities, and one or more environmental activities.

In some embodiments, the processing unit is configured to detect one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

In some embodiments, the processing unit is configured to detect a number of subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

In some embodiments, the processing unit is configured to identify at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

In some embodiments, the processing unit is configured to detect one or more biometrically identifiable activities of at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

In some embodiments, the processing unit is configured to identify at least one subject of the one or more subjects in the measurement volume based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject.

In some embodiments, the one or more biometrically identifiable activities may include at least one of voice activity, respiratory activity and cardiac activity.

These, additional, and/or other aspects and/or advantages of the present invention are set forth in the detailed description which follows, may be inferable from the detailed description, and/or are learnable by practice of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

For a better understanding of embodiments of the invention and to show how the same can be carried into effect, reference will now be made, purely by way of example, to the accompanying drawings in which like numerals designate corresponding elements or sections throughout.

In the accompanying drawings:

Fig. 1 is a block diagram of a system for remote multimodal sensing of a measurement volume, according to some embodiments of the invention; and

Fig. 2 is a flowchart of a method of remote multimodal sensing of a measurement volume, according to some embodiments of the invention.

It will be appreciated that, for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.

DETAILED DESCRIPTION OF THE INVENTION

In the following description, various aspects of the present invention are described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to one skilled in the art that the present invention can be practiced without the specific details presented herein. Furthermore, well-known features can have been omitted or simplified in order not to obscure the present invention.

With specific reference to the drawings, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the present invention only and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention can be embodied in practice.

Before at least one embodiment of the invention is explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of the components set forth in the following description or illustrated in the drawings. The invention is applicable to other embodiments that can be practiced or carried out in various ways as well as to combinations of the disclosed embodiments. Also, it is to be understood that the phraseology and terminology employed herein is for the purpose of description and should not be regarded as limiting.

Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing", "computing", "calculating", "determining", "enhancing" or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices. Any of the disclosed modules or units can be at least partially implemented by a computer processor.

Reference is now made to Fig. 1, which is a block diagram of a system 100 for remote multimodal sensing of a measurement volume, according to some embodiments of the invention.

According to some embodiments of the invention, system 100 may include an optical sensing unit 110. Optical sensing unit 110 may include an optical detector 112. Optical detector 112 may detect light 92 from measurement volume 90. Optical detector 112 may generate an optical output dataset 112a based on at least a portion of detected light 92. Optical detector 112 may, for example, include a single detector or an array of detectors. Optical detector 112 may, for example, include a camera, an RGB camera, a thermal camera, etc.

In some embodiments, optical sensing unit 110 may be a passive sensing unit. For example, optical sensing unit 110 may have no active light sources. Passive optical sensing unit 110 may, for example, provide discreet operation of system 100 because it may be difficult to detect passive optical sensing unit 110.

In some embodiments, optical sensing unit 110 may be an active sensing unit. In some embodiments, optical sensing unit 110 may include a light source 114. Light source 114 may illuminate at least a portion of measurement volume 90 with one or more light beams 114a, and optical detector 112 may detect light 92 from measurement volume 90 and generate optical output dataset 112a based on at least a portion of detected light 92. Light source 114 may, for example, include a laser (e.g., a pulsed, CW, or FM laser, a laser implementing a wide range of wavelengths, etc.). Active optical sensing unit 110 may, for example, enable controlling the illumination of measurement volume 90 (e.g., to ensure interaction of the illumination with objects within measurement volume 90) and/or may increase a signal-to-noise ratio (SNR) (e.g., due to a relatively high-energy probe waveform, etc.).

Optical output dataset 112a may include one or more optical output subsets of data values. In some embodiments, the data values of at least one of the optical output subset(s) may be indicative of one or more activities in measurement volume 90. For example, the one or more activities may be one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc. The one or more human-induced activities may, for example, include running, walking, kneeling, shooting, etc. The one or more machine-induced activities may, for example, include power tool operation, etc. The one or more environmental activities may, for example, include traffic, winds, etc.

In some embodiments, the data values of at least one of the optical output subset(s) may be indicative of a presence of one or more subjects in measurement volume 90. In some embodiments, the data values of at least one of the optical output subset(s) may be indicative of one or more biometrically identifiable activities of at least one of the subject(s) in measurement volume 90. The one or more biometrically identifiable activities may, for example, include a voice activity, respiratory activity, cardiac activity, etc.

According to some embodiments of the invention, system 100 may include an electromagnetic (EM) sensing unit 120.

EM sensing unit 120 may include an EM receiver 122. EM receiver 122 may receive one or more EM signals 94 from measurement volume 90. For example, EM receiver 122 may be a radiofrequency (RF) receiver capable of receiving RF signals 94. EM receiver 122 may generate an EM output dataset 122a based on at least a portion of received EM signal(s) 94.

In some embodiments, EM sensing unit 120 may be a passive sensing unit. For example, EM sensing unit 120 may have no active EM transmitters. Passive EM sensing unit 120 may, for example, provide discreet operation of system 100 because it may be difficult to detect passive EM sensing unit 120.

In some embodiments, EM sensing unit 120 may be an active sensing unit. In some embodiments, EM sensing unit 120 may include an EM transmitter 124. EM transmitter 124 may transmit one or more transmitted EM signals 124a into at least a portion of measurement volume 90, and EM receiver 122 may receive one or more EM signals 94 from measurement volume 90 and generate EM output dataset 122a based on at least a portion of received EM signal(s) 94. Active EM sensing unit 120 may, for example, enable controlling the transmission of EM signal(s) 124a to measurement volume 90 (e.g., to ensure interaction of EM signal(s) 124a with objects within measurement volume 90) and/or may increase a signal-to-noise ratio (SNR).

EM output dataset 122a may include one or more EM output subsets of data values. In some embodiments, the data values of at least one of the EM output subset(s) may be indicative of one or more activities in measurement volume 90 (e.g., one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc., as described above).

In some embodiments, the data values of at least one of the EM output subset(s) may be indicative of a presence of one or more subjects in measurement volume 90 (e.g., as described above).

In some embodiments, the data values of at least one of the EM output subset(s) may be indicative of one or more biometrically identifiable activities of at least one of the subject(s) in measurement volume 90 (e.g., voice activity, respiratory activity, cardiac activity, etc., as described above).

According to some embodiments of the invention, system 100 may include a multiparameter dataset determination module 132. Multiparameter dataset determination module 132 may determine a multiparameter output dataset 132a based on at least a portion of optical output dataset 112a and at least a portion of EM output dataset 122a.

Multiparameter output dataset 132a may include one or more multiparameter output subsets of data values. In some embodiments, the data values of at least one of the multiparameter output subset(s) may be indicative of one or more activities in measurement volume 90 (e.g., one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc., as described above).

In some embodiments, the data values of at least one of the multiparameter output subset(s) may be indicative of a presence of one or more subjects in measurement volume 90 (e.g., as described above).

In some embodiments, the data values of at least one of the multiparameter output subset(s) may be indicative of one or more biometrically identifiable activities of at least one of the subject(s) in measurement volume 90 (e.g., voice activity, respiratory activity, cardiac activity, etc., as described above).

In some embodiments, multiparameter dataset determination module 132 may fuse at least a portion of optical output dataset 112a with at least a portion of EM output dataset 122a to determine multiparameter output dataset 132a. For example, multiparameter dataset determination module 132 may fuse at least a portion of optical output dataset 112a with at least a portion of EM output dataset 122a using one or more machine learning algorithms to determine multiparameter output dataset 132a. In another example, multiparameter dataset determination module 132 may fuse at least a portion of optical output dataset 112a with at least a portion of EM output dataset 122a using one or more deep learning algorithms to determine multiparameter output dataset 132a. In another example, multiparameter dataset determination module 132 may fuse at least a portion of a weighted optical output dataset 112a with at least a portion of a weighted EM output dataset 122a to determine multiparameter output dataset 132a.
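
By way of illustration only, the following minimal Python sketch shows one way the weighted-fusion variant could be realized, assuming both datasets have already been reduced to aligned feature vectors of equal length; the function name `fuse_weighted` and the vector representation are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def fuse_weighted(optical: np.ndarray, em: np.ndarray,
                  w_optical: float, w_em: float) -> np.ndarray:
    """Fuse two aligned feature vectors by a normalized weighted average.

    A minimal sketch: the optical and EM output datasets are assumed to
    have been reduced to equal-length feature vectors beforehand.
    """
    total = w_optical + w_em
    return (w_optical * optical + w_em * em) / total

# Usage: equal trust in both modalities.
optical_features = np.array([0.8, 0.1, 0.3])
em_features = np.array([0.6, 0.2, 0.5])
fused = fuse_weighted(optical_features, em_features, w_optical=0.5, w_em=0.5)
```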

In some embodiments, multiparameter dataset determination module 132 may determine a contribution of each of optical output dataset 112a and EM output dataset 122a to multiparameter output dataset 132a based on at least one of a distance of system 100 to measurement volume 90, environmental conditions, and a line of sight (LOS) of system 100 to measurement volume 90.

For example, the farther system 100 is from measurement volume 90, the greater the contribution of optical output dataset 112a to multiparameter output dataset 132a relative to that of EM output dataset 122a. In another example, the worse the environmental conditions (e.g., rainy, foggy weather, etc.), the greater the contribution of EM output dataset 122a to multiparameter output dataset 132a relative to that of optical output dataset 112a. In another example, the clearer the LOS of system 100 to measurement volume 90, the greater the contribution of optical output dataset 112a to multiparameter output dataset 132a relative to that of EM output dataset 122a.

In some embodiments, multiparameter dataset determination module 132 may determine a contribution of each of optical output dataset 112a and EM output dataset 122a to multiparameter output dataset 132a based on a signal-to-noise ratio (SNR) of each of optical output dataset 112a and EM output dataset 122a. For example, the higher the SNR of one of optical output dataset 112a and EM output dataset 122a, the greater the contribution thereof to multiparameter output dataset 132a.
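
As a sketch of how the SNR-based contribution could be computed, the snippet below estimates each modality's SNR from a signal segment and a noise-only segment and maps the two estimates to normalized fusion weights; the softmax mapping and the dB scaling constant are illustrative choices, not taken from the patent.

```python
import numpy as np

def snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """Estimate SNR in dB from a signal segment and a noise-only segment."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def contribution_weights(snr_optical_db: float, snr_em_db: float) -> tuple[float, float]:
    """Map per-modality SNR estimates to fusion weights via a softmax so the
    higher-SNR modality contributes more (a hypothetical mapping)."""
    scores = np.array([snr_optical_db, snr_em_db]) / 10.0
    weights = np.exp(scores - scores.max())  # subtract max for numerical stability
    weights /= weights.sum()
    return float(weights[0]), float(weights[1])
```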

According to some embodiments of the invention, system 100 may include an activity detection and classification module 134. Activity detection and classification module 134 may detect one or more activities in measurement volume 90 based on at least a portion of multiparameter output dataset 132a. In some embodiments, activity detection and classification module 134 may classify one or more activities in measurement volume 90 based on at least a portion of multiparameter dataset 132a.

In some embodiments, activity detection and classification module 134 may classify one or more activities in measurement volume 90 based on multiparameter output dataset 132a and a reference multiparameter dataset 142. For example, the one or more activities may be one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc. (e.g., as described hereinabove). Reference multiparameter dataset 142 may, for example, include multiple reference sets of data values. The reference sets may be accompanied by metadata describing the activities for which the respective sets have been obtained. For example, reference multiparameter dataset 142 may include one or more first reference optical sets and one or more first reference EM sets obtained for a first activity (e.g., walking of one or more subjects). In another example, reference multiparameter dataset 142 may include one or more second reference optical sets and one or more second reference EM sets obtained for a second activity (e.g., operation of a power tool). Activity detection and classification module 134 may, for example, compare at least a portion of multiparameter output dataset 132a with at least a portion of the reference sets of reference multiparameter dataset 142 and classify at least one of the one or more activities in measurement volume 90 based on the comparison thereof.
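
One plausible reading of this compare-and-classify step is a nearest-reference search over labeled reference sets, sketched below; the feature representation, the Euclidean distance metric, and the activity labels are assumptions for illustration.

```python
import numpy as np

# Hypothetical reference multiparameter dataset: each entry pairs a fused
# feature vector with metadata naming the activity it was obtained for.
reference_sets = [
    {"activity": "walking", "features": np.array([0.9, 0.2, 0.1])},
    {"activity": "power_tool_operation", "features": np.array([0.1, 0.8, 0.7])},
]

def classify_activity(fused: np.ndarray) -> str:
    """Return the activity label of the closest reference set
    (nearest-neighbour comparison over the reference dataset)."""
    distances = [np.linalg.norm(fused - ref["features"]) for ref in reference_sets]
    return reference_sets[int(np.argmin(distances))]["activity"]

print(classify_activity(np.array([0.85, 0.25, 0.15])))  # -> "walking"
```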

In various embodiments, activity detection and classification module 134 may detect and/or classify one or more activities in measurement volume 90 based on at least a portion of multiparameter output dataset 132a and using one or more artificial intelligence algorithms. For example, activity detection and classification module 134 may detect and/or classify the one or more activities in measurement volume 90 using one or more machine learning algorithms and/or one or more deep learning algorithms (e.g., supervised or reinforcement learning classifier algorithms).
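
For the learning-based variant, any generic supervised classifier over fused feature vectors would fit the description; the sketch below uses a random forest purely as a stand-in, since the text does not commit to a particular algorithm, and the training data shown are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented training data: fused multiparameter feature vectors labeled
# with the activity during which they were recorded.
X_train = np.array([[0.90, 0.20, 0.10], [0.85, 0.25, 0.15],
                    [0.10, 0.80, 0.70], [0.15, 0.75, 0.65]])
y_train = ["walking", "walking", "power_tool_operation", "power_tool_operation"]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(clf.predict([[0.2, 0.7, 0.6]]))  # -> ['power_tool_operation']
```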

According to some embodiments of the invention, system 100 may include a database 140. Database 140 may, for example, store reference multiparameter dataset 142. In some embodiments, database 140 may be external to system 100.

According to some embodiments of the invention, system 100 may include a subject detection and identification module 136. In some embodiments, subject detection and identification module 136 may detect one or more subjects in measurement volume 90.

In some embodiments, subject detection and identification module 136 may detect one or more subjects in measurement volume 90 based on at least a portion of multiparameter output dataset 132a and at least a portion of reference multiparameter dataset 142. For example, reference multiparameter dataset 142 may include one or more third reference optical sets and one or more third reference EM sets obtained for measurement volume 90 having one or more subjects. Subject detection and identification module 136 may compare at least a portion of multiparameter output dataset 132a with at least a portion of the reference sets of reference multiparameter dataset 142 and detect the presence of one or more subjects in measurement volume 90 based on the comparison thereof.

In some embodiments, subject detection and identification module 136 may detect one or more subjects in measurement volume 90 based on at least a portion of multiparameter output dataset 132a and using one or more artificial intelligence algorithms (e.g., one or more machine learning algorithms and/or one or more deep learning algorithms).

In some embodiments, subject detection and identification module 136 may determine that there are two or more subjects in measurement volume 90 based on at least a portion of multiparameter output dataset 132a. For example, subject detection and identification module 136 may compare at least some of the multiparameter output subsets of multiparameter output dataset 132a and determine, based on the comparison thereof, that at least some of the multiparameter output subsets relate to different subjects. In some embodiments, subject detection and identification module 136 may determine a number of the subjects in measurement volume 90. In some embodiments, subject detection and identification module 136 may identify at least one subject of one or more subjects in measurement volume 90 based on at least a portion of multiparameter output dataset 132a.
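
A simple way to realize this subset comparison is to group subset feature vectors by mutual distance and count the resulting groups, as in the sketch below; the greedy grouping and the distance threshold are illustrative assumptions.

```python
import numpy as np

def count_subjects(subsets: list[np.ndarray], threshold: float = 0.5) -> int:
    """Count distinct subjects by greedy grouping: a subset whose feature
    vector lies farther than `threshold` from every group seen so far is
    taken to relate to a different subject."""
    representatives: list[np.ndarray] = []
    for features in subsets:
        if all(np.linalg.norm(features - rep) > threshold for rep in representatives):
            representatives.append(features)
    return len(representatives)

# Usage: two nearby subsets and one distant subset -> two subjects.
print(count_subjects([np.array([0.10, 0.10]),
                      np.array([0.15, 0.10]),
                      np.array([0.90, 0.80])]))  # -> 2
```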

In some embodiments, subject detection and identification module 136 may detect one or more biometrically identifiable activities of at least one subject of the one or more subjects in measurement volume 90 based on at least a portion of multiparameter output dataset 132a. The one or more biometrically identifiable activities may, for example, include a voice activity, respiratory activity, cardiac activity, etc. In some embodiments, subject detection and identification module 136 may identify the at least one subject of the one or more subjects in measurement volume 90 based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject.

In some embodiments, subject detection and identification module 136 may detect one or more biometrically identifiable activities of at least one subject of the one or more subjects in measurement volume 90 based on at least a portion of multiparameter output dataset 132a and at least a portion of reference multiparameter dataset 142. For example, reference multiparameter dataset 142 may include one or more fourth reference optical sets and one or more fourth reference EM sets obtained for a first biometrically identifiable activity (e.g., cardiac activity). Subject detection and identification module 136 may compare multiparameter output dataset 132a with at least a portion of the reference sets of reference multiparameter dataset 142 and detect the first biometrically identifiable activity in measurement volume 90.

In some embodiments, subject detection and identification module 136 may detect one or more biometrically identifiable activities of at least one subject of the one or more subjects in measurement volume 90 based on at least a portion of multiparameter output dataset 132a and using one or more artificial intelligence algorithms.

In some embodiments, subject detection and identification module 136 may identify at least one subject of the one or more subjects in measurement volume 90 based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject and based on at least a portion of a reference biometric multiparameter dataset 144. For example, reference biometric multiparameter dataset 144 may include multiple reference biometric sets of data values. The reference biometric sets may be accompanied by metadata describing the biometrically identifiable activities and identification data of the subjects for which the respective sets have been obtained. For example, reference biometric multiparameter dataset 144 may include one or more first reference biometric optical sets and one or more first reference biometric EM sets obtained for the first biometrically identifiable activity (e.g., cardiac activity) of a first subject. In another example, reference biometric multiparameter dataset 144 may include one or more second reference biometric optical sets and one or more second reference biometric EM sets obtained for a second biometrically identifiable activity (e.g., voice activity) of a second subject. Subject detection and identification module 136 may compare at least a portion of multiparameter output dataset 132a with at least a portion of the reference biometric sets of reference biometric multiparameter dataset 144 and identify the at least one subject of the one or more subjects based on the comparison thereof. In some embodiments, subject detection and identification module 136 may identify at least one subject of the one or more subjects in measurement volume 90 based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject and using one or more artificial intelligence algorithms.
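
The identification step could, for instance, be realized as template matching of a detected biometric waveform (e.g., a cardiac signal) against enrolled reference waveforms, as sketched below; the normalized-correlation score, the enrollment dictionary, and the acceptance threshold are all illustrative assumptions.

```python
import numpy as np

def identify_subject(waveform: np.ndarray,
                     enrolled: dict[str, np.ndarray],
                     min_score: float = 0.8) -> str | None:
    """Match a detected biometric waveform against enrolled reference
    waveforms by normalized correlation; return the best-matching subject
    ID, or None if no enrolled waveform clears `min_score`.

    Assumes equal-length, time-aligned waveforms.
    """
    def ncc(a: np.ndarray, b: np.ndarray) -> float:
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    scores = {subject_id: ncc(waveform, template)
              for subject_id, template in enrolled.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_score else None
```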

In some embodiments, system 100 may include a notification module 138. Notification module 138 may generate one or more notifications concerning at least one of the one or more detected activities in measurement volume 90, the presence of one or more subjects in measurement volume 90, the number of the subjects in measurement volume 90 and the identity of at least one of the one or more subjects in measurement volume 90.

In some embodiments, system 100 may include a control module 139. Control module 139 may control operation of optical sensing unit 110 and EM sensing unit 120.

In various embodiments, optical sensing unit 110 may include two or more optical detectors and/or two or more light sources. In various embodiments, EM sensing unit 120 may include two or more EM receivers and/or two or more EM transmitters. In various embodiments, system 100 may include two or more optical sensing units 110 and/or two or more EM sensing units 120. This may, for example, enhance the sensing capabilities of system 100 and provide better resolution of activity detection and/or of subject detection and identification.

As would be apparent to those of ordinary skill in the art, each module/database in system 100 may be implemented on its own computing device, a single computing device, or a combination of computing devices. The communication between the elements of system 100 may be wired and/or wireless.

Reference is now made to Fig. 2, which is a flowchart of a method of remote multimodal sensing of a measurement volume, according to some embodiments of the invention.

Some embodiments may include detecting 202, by at least one optical detector, light from a measurement volume (e.g., by optical detector 112 as described above with respect to Fig. 1). Some embodiments may include generating 204, by the at least one optical detector, an optical output dataset based on at least a portion of detected light (e.g., optical output dataset 112a as described above with respect to Fig. 1).

Some embodiments may include illuminating at least a portion of the measurement volume with one or more light beams from at least one light source, detecting, by the at least one optical detector, the light from the measurement volume, and generating, by the at least one optical detector, an optical output dataset based on at least a portion of detected light (e.g., light beams 114a from light source 114 as described above with respect to Fig. 1).

The optical output dataset may, for example, include one or more optical output subsets of data values. In some embodiments, the data values of at least one of the optical output subset(s) may be indicative of one or more activities in the measurement volume. For example, the one or more activities may be one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc. The one or more human-induced activities may, for example, include running, walking, kneeling, shooting, etc. The one or more machine-induced activities may, for example, include power tool operation, etc. The one or more environmental activities may, for example, include traffic, winds, etc.

In some embodiments, the data values of at least one of the optical output subset(s) may be indicative of a presence of one or more subjects in the measurement volume.

In some embodiments, the data values of at least one of the optical output subset(s) may be indicative of one or more biometrically identifiable activities of at least one of the subject(s) in the measurement volume. The one or more biometrically identifiable activities may, for example, include a voice activity, respiratory activity, cardiac activity, etc.

Some embodiments may include receiving 206, by at least one electromagnetic (EM) receiver, one or more EM signals from the measurement volume (e.g., EM receiver 122 as described above with respect to Fig. 1). For example, the EM receiver may be a radiofrequency (RF) receiver capable of receiving RF signals. Some embodiments may include generating 208, by the at least one EM receiver, an EM output dataset based on at least a portion of the one or more received EM signals (e.g., EM output dataset 122a as described above with respect to Fig. 1).

Some embodiments may include transmitting, by at least one EM transmitter, one or more transmitted EM signals into at least a portion of the measurement volume, receiving, by the at least one EM receiver, the one or more EM signals from the measurement volume, and generating, by the at least one EM receiver, the EM output dataset based on at least a portion of the one or more received EM signals (e.g., EM signals 124a transmitted by EM transmitter 124 as described above with respect to Fig. 1).

The EM output dataset may, for example, include one or more EM output subsets of data values. In some embodiments, the data values of at least one of the EM output subset(s) may be indicative of one or more activities in the measurement volume (e.g., one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc., as described above).

In some embodiments, the data values of at least one of the EM output subset(s) may be indicative of a presence of one or more subjects in the measurement volume (e.g., as described above).

In some embodiments, the data values of at least one of the EM output subset(s) may be indicative of one or more biometrically identifiable activities of at least one of the subject(s) in the measurement volume (e.g., voice activity, respiratory activity, cardiac activity, etc., as described above).

Some embodiments may include determining 210, by a processing unit, a multiparameter output dataset based on at least a portion of the optical output dataset and at least a portion of the EM output dataset (e.g., multiparameter output dataset 132a as described above with respect to Fig. 1). The multiparameter output dataset may, for example, include one or more multiparameter output subsets of data values. In some embodiments, the data values of at least one of the multiparameter output subset(s) may be indicative of one or more activities in the measurement volume (e.g., one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc., as described above).

In some embodiments, the data values of at least one of the multiparameter output subset(s) may be indicative of a presence of one or more subjects in the measurement volume (e.g., as described above).

In some embodiments, the data values of at least one of the multiparameter output subset(s) may be indicative of one or more biometrically identifiable activities of at least one of the subject(s) in the measurement volume (e.g., voice activity, respiratory activity, cardiac activity, etc., as described above).

Some embodiments may include determining, by the processing unit, the multiparameter output dataset by fusing at least a portion of the optical output dataset with at least a portion of the EM output dataset. Some embodiments may include fusing at least a portion of the optical output dataset with at least a portion of the EM output dataset using one or more machine learning algorithms to determine the multiparameter output dataset. Some embodiments may include fusing at least a portion of the optical output dataset with at least a portion of the EM output dataset using one or more deep learning algorithms to determine the multiparameter output dataset. Some embodiments may include fusing at least a portion of a weighted optical output dataset with at least a portion of a weighted EM output dataset to determine the multiparameter output dataset.

Some embodiments may include determining, by the processing unit, a contribution of each of the optical output dataset and the EM output dataset to the multiparameter output dataset based on at least one of a distance of the at least one optical detector and the at least one EM receiver to the measurement volume, environmental conditions, a line of sight (LOS) of the at least one optical detector to the measurement volume, and a signal-to-noise ratio (SNR) of the optical output dataset and the EM output dataset (e.g., as described above with respect to Fig. 1).

For example, the farther the at least one optical detector is from the measurement volume, the greater the contribution of the optical output dataset to the multiparameter output dataset relative to that of the EM output dataset. In another example, the worse the environmental conditions (e.g., rainy, foggy weather, etc.), the greater the contribution of the EM output dataset to the multiparameter output dataset relative to that of the optical output dataset. In another example, the clearer the LOS of the at least one optical detector to the measurement volume, the greater the contribution of the optical output dataset to the multiparameter output dataset relative to that of the EM output dataset. In another example, the higher the SNR of one of the optical output dataset and the EM output dataset, the greater the contribution thereof to the multiparameter output dataset.

Some embodiments may include detecting, by the processing unit, one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset. Some embodiments may include classifying, by the processing unit, one or more activities in the measurement volume based on at least a portion of the multiparameter dataset.

Some embodiments may include classifying one or more activities in the measurement volume based on at least a portion of the multiparameter dataset and a reference multiparameter dataset (e.g., reference multiparameter dataset 142 as described above with respect to Fig. 1). For example, the one or more activities may be one or more human-induced activities, one or more machine-induced activities, one or more environmental activities, etc. (e.g., as described hereinabove). The reference multiparameter dataset may, for example, include multiple reference sets of data values. The reference sets may be accompanied by metadata describing the activities for which the respective sets have been obtained. For example, the reference multiparameter dataset may include one or more first reference optical sets and one or more first reference EM sets obtained for a first activity (e.g., walking of one or more subjects). In another example, the reference multiparameter dataset may include one or more second reference optical sets and one or more second reference EM sets obtained for a second activity (e.g., operation of a power tool). Some embodiments may include comparing at least a portion of the multiparameter output dataset with at least a portion of the reference sets of the reference multiparameter dataset and classifying at least one of the one or more activities in the measurement volume based on the comparison thereof.

Various embodiments may include detecting and/or classifying one or more activities in the measurement volume based on at least a portion of the multiparameter output dataset and using one or more artificial intelligence algorithms. For example, some embodiments may include detecting and/or classifying the one or more activities in the measurement volume using one or more machine learning algorithms and/or one or more deep learning algorithms (e.g., supervised or reinforcement learning classifier algorithms).

Some embodiments may include detecting, by the processing unit, one or more subjects in the measurement volume.

Some embodiments may include detecting, by the processing unit, one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset and at least a portion of the reference multiparameter dataset. For example, the reference multiparameter dataset may include one or more third reference optical sets and one or more third reference EM sets obtained for the measurement volume having one or more subjects. Some embodiments may include comparing at least a portion of the multiparameter output dataset with at least a portion of the reference sets of the reference multiparameter dataset and detecting the presence of one or more subjects in the measurement volume based on the comparison thereof.

Some embodiments may include detecting, by the processing unit, one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset and using one or more artificial intelligence algorithms (e.g., one or more machine learning algorithms and/or one or more deep learning algorithms).

Some embodiments may include determining, by the processing unit, that there are two or more subjects within the measurement volume based on at least a portion of the multiparameter output dataset. For example, some embodiments may include comparing at least some of the multiparameter output subsets of the multiparameter output dataset and determining, based on the comparison thereof, that at least some of the multiparameter output subsets relate to different subjects. Some embodiments may include determining, by the processing unit, a number of the subjects in the measurement volume. Some embodiments may include identifying, by the processing unit, at least one subject of one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset.

Some embodiments may include detecting, by the processing unit, one or more biometrically identifiable activities of at least one subject of the one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset. The one or more biometrically identifiable activities may, for example, include a voice activity, respiratory activity, cardiac activity, etc. Some embodiments may include identifying, by the processing unit, the at least one subject of the one or more subjects in the measurement volume based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject.

Some embodiments may include detecting, by the processing unit, one or more biometrically identifiable activities of at least one subject of the one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset and at least a portion of the reference multiparameter dataset. For example, the reference multiparameter dataset may include one or more fourth reference optical sets and one or more fourth reference EM sets obtained for a first biometrically identifiable activity (e.g., cardiac activity). Some embodiments may include comparing, by the processing unit, at least a portion of the multiparameter output dataset with at least a portion of the reference sets of the reference multiparameter dataset and detecting the first biometrically identifiable activity in the measurement volume.

Some embodiments may include detecting, by the processing unit, one or more biometrically identifiable activities of at least one subject of the one or more subjects in the measurement volume based on at least a portion of the multiparameter output dataset and using one or more artificial intelligence algorithms.

Some embodiments may include identifying, by the processing unit, at least one subject of the one or more subjects in the measurement volume based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject and based on at least a portion of a reference biometric multiparameter dataset (e.g., reference biometric multiparameter dataset 144 as described above with respect to Fig. 1). For example, the reference biometric multiparameter dataset may include multiple reference biometric sets of data values. The reference biometric sets may be accompanied by metadata describing the biometrically identifiable activities and identification data of the subjects for which the respective sets have been obtained. For example, the reference biometric multiparameter dataset may include one or more first reference biometric optical sets and one or more first reference biometric EM sets obtained for the first biometrically identifiable activity (e.g., cardiac activity) of a first subject. In another example, the reference biometric multiparameter dataset may include one or more second reference biometric optical sets and one or more second reference biometric EM sets obtained for a second biometrically identifiable activity (e.g., voice activity) of a second subject. Some embodiments may include comparing, by the processing unit, at least a portion of the multiparameter output dataset with at least a portion of the reference biometric sets of the reference biometric multiparameter dataset and identifying the at least one subject of the one or more subjects based on the comparison thereof.

Some embodiments may include identifying, by the processing unit, at least one subject of the one or more subjects in the measurement volume based on at least one of the one or more biometrically identifiable activities detected for the respective at least one subject and using one or more artificial intelligence algorithms.

Some embodiments may include generating, by the processing unit, one or more notifications concerning at least one of the one or more detected activities in the measurement volume, the presence of the one or more subjects in the measurement volume, the number of subjects in the measurement volume, and the identity of at least one of the one or more subjects in the measurement volume, based on direct measurement of the activities. For example, if a person is walking, then monitoring the movement within the room can be regarded as a direct measurement of the walking person.

Some embodiments may include generating, by the processing unit, one or more notifications concerning at least one of the one or more detected activities in the measurement volume, the presence of the one or more subjects in the measurement volume, the number of subjects in the measurement volume, and the identity of at least one of the one or more subjects in the measurement volume, based on indirect measurement of the activities. For example, if a person is walking, then acoustic waves propagating through the floor and the walls can be regarded as an indirect measurement of the walking person.
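As a non-limiting sketch, a notification covering the direct and indirect cases above might carry fields such as the following; the record fields and the direct/indirect tag are illustrative assumptions.

```python
# Non-limiting sketch: a notification record distinguishing direct and
# indirect measurements. Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Notification:
    event: str                 # e.g., "walking detected"
    subject_count: int         # number of subjects in the measurement volume
    subject_id: Optional[str]  # identity of a subject, when available
    measurement: str           # "direct" (e.g., observed movement) or
                               # "indirect" (e.g., acoustic waves via floor/walls)

def notify(event: str, subject_count: int,
           subject_id: Optional[str], direct: bool) -> Notification:
    return Notification(event, subject_count, subject_id,
                        "direct" if direct else "indirect")

print(notify("walking detected", 1, None, direct=False))
```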

Aspects of the present invention are described above with reference to flowchart illustrations and/or portion diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each portion of the flowchart illustrations and/or portion diagrams, and combinations of portions in the flowchart illustrations and/or portion diagrams, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or portion diagram or portions thereof.

These computer program instructions can also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or portion diagram portion or portions thereof. The computer program instructions can also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or portion diagram portion or portions thereof.

The aforementioned flowchart and diagrams illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each portion in the flowchart or portion diagrams can represent a module, segment, or portion of code, which includes one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the portion can occur out of the order noted in the figures. For example, two portions shown in succession can, in fact, be executed substantially concurrently, or the portions can sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each portion of the portion diagrams and/or flowchart illustration, and combinations of portions in the portion diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

In the above description, an embodiment is an example or implementation of the invention. The various appearances of "one embodiment", "an embodiment", "certain embodiments" or "some embodiments" do not necessarily all refer to the same embodiments. Although various features of the invention can be described in the context of a single embodiment, the features can also be provided separately or in any suitable combination. Conversely, although the invention can be described herein in the context of separate embodiments for clarity, the invention can also be implemented in a single embodiment. Certain embodiments of the invention can include features from different embodiments disclosed above, and certain embodiments can incorporate elements from other embodiments disclosed above. The disclosure of elements of the invention in the context of a specific embodiment is not to be taken as limiting their use in the specific embodiment alone. Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in certain embodiments other than the ones outlined in the description above.

The invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described. Unless otherwise defined, technical and scientific terms used herein are to be understood as they are commonly understood by one of ordinary skill in the art to which the invention belongs. While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.