Title:
SMARTGLASSES FOR DETECTING PHYSIOLOGICAL PARAMETERS
Document Type and Number:
WIPO Patent Application WO/2021/084488
Kind Code:
A1
Abstract:
Some aspects of this disclosure involve head-mounted systems, such as smartglasses, configured to perform one or more of the following: utilize correlations between PPG signals and iPPG signals to improve detection of physiological responses, monitor blood glucose level, detect fever from images and temperatures, detect congestive heart failure, detect respiratory tract infection based on changes in coughing sounds, and enable adjustment of the smartglasses' frame to improve the smartglasses' fit.

Inventors:
FRANK ARI M (IL)
TZVIELI ORI (US)
TZVIELI ARIE (US)
THIEBERGER GIL (IL)
Application Number:
PCT/IB2020/060196
Publication Date:
May 06, 2021
Filing Date:
October 30, 2020
Assignee:
FACENSE LTD (IL)
International Classes:
A61B5/00; A61B5/01; A61B5/0205; A61B5/1455
Domestic Patent References:
WO2019181150A12019-09-26
Foreign References:
US20190313915A12019-10-17
US20170231490A12017-08-17
US20090182688A12009-07-16
US20170095157A12017-04-06
US20170367651A12017-12-28
Attorney, Agent or Firm:
ACTIVE KNOWLEDGE LTD. (IL)
Claims:
WE CLAIM

1. A system configured to calculate blood glucose level, comprising: a head-mounted contact photoplethysmography device configured to measure a signal indicative of a photoplethysmogram signal (PPG signal) at a first region comprising skin on a user’s head; a head-mounted camera configured to capture images of a second region comprising skin on the user’s head; and a computer configured to: identify, based on the PPG signal, times of systolic notches and times of systolic peaks; and calculate the blood glucose level based on differences between a first subset of the images taken during the times of systolic notches and a second subset of the images taken during the times of systolic peaks.

2. A system configured to detect fever, comprising: a first head-mounted temperature sensor configured to measure skin temperature (TSkin) at a first region on a user’s head; a second head-mounted temperature sensor configured to measure temperature of the environment (Tenv); and a computer configured to: receive images of a second region on the user’s face, captured by a camera sensitive to wavelengths below 1050 nanometer; calculate, based on the images, values indicative of hemoglobin concentrations at three or more regions on the user’s face; and detect whether the user has a fever based on TSkin, Tenv, and the values.

3. A system comprising: a head-mounted contact photoplethysmography device configured to measure a signal indicative of photoplethysmogram signal (PPG signal) at a first region that comprises exposed skin on a user’s head; an inward-facing head-mounted camera configured to capture images of a second region that comprises exposed skin on the user’s head; wherein the camera is not in physical contact with the second region; and a computer configured to detect a physiological response based on: (i) imaging photoplethysmogram signals (iPPG signals) recognizable in the images, and (ii) correlations between the PPG signal and the iPPG signals.

4. A system configured to calculate extent of congestive heart failure (CHF), comprising: smartglasses configured to be worn on a user’s head; an inward-facing camera, physically coupled to the smartglasses, configured to capture images of an area comprising skin on the user’s head; wherein the area is larger than 4 cm^2, and the camera is mounted more than 5 mm away from the user’s head; a sensor, physically coupled to the smartglasses, configured to measure a signal indicative of a respiration rate of the user; and a computer configured to calculate the extent of CHF based on: a facial blood flow pattern recognizable in the images, and the respiration rate of the user recognizable in the signal.

5. A system configured to detect a respiratory tract infection (RTI) based on changes in coughing sounds, comprising: a wearable ambulatory system comprising the following sensors: (i) a head-mounted acoustic sensor configured to be mounted at a fixed position relative to a user’s head, and to take audio recordings comprising coughing sounds of the user; and (ii) a head-mounted movement sensor configured to measure a signal indicative of movements of the user’s head (movement signal); and a computer configured to: (i) receive current measurements of the user, taken with the sensors while the movement signal was indicative of head movements that characterize coughing; (ii) receive earlier measurements of the user, taken with the sensors at least four hours before the current measurements, while the user had a known extent of the RTI and while the movement signal was indicative of head movements that characterize coughing; and (iii) detect a change relative to the known extent of the RTI based on a difference between the current measurements and the earlier measurements.

6. Smartglasses comprising: a front element configured to support lenses; two temples coupled to the front element; each temple comprising: a first portion, coupled to the front element, comprising first electronic components; a second portion, coupled to the first portion, comprising electric wires; a third portion, coupled to the second portion, comprising second electronic components; wherein the second portion is designed to be bent around a human ear to improve the smartglasses’ fit, and the first and third portions are not designed to be bent to improve the smartglasses’ fit.

Description:
SMARTGLASSES FOR DETECTING PHYSIOLOGICAL PARAMETERS

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This Application claims priority to U.S. Provisional Patent Application No. 62/928,726, filed Oct 31, 2019, U.S. Provisional Patent Application No. 62/945,141, filed Dec 7, 2019, U.S. Provisional Patent Application No. 62/960,913, filed Jan 14, 2020, U.S. Provisional Patent Application No. 63/006,827, filed Apr 8, 2020, U.S. Provisional Patent Application No. 63/024,471, filed May 13, 2020, and U.S. Provisional Patent Application No. 63/048,638, filed July 6, 2020.

TECHNICAL FIELD

[0002] This application relates to head-mounted systems, such as smartglasses, configured to detect physiological parameters.

ACKNOWLEDGMENTS

[0003] Gil Thieberger would like to thank his holy and beloved teacher, Uama Dvora-hla, for her extraordinary teachings and manifestation of wisdom, love, compassion and morality, and for her endless efforts, support, and skills in guiding him and others on their paths to freedom and ultimate happiness. Gil would also like to thank his beloved parents for raising him with love and care.

BACKGROUND

[0004] Monitoring and analyzing measurements of the face are useful for many health-related and life logging related applications. However, collecting such data over time when people are going through their daily activities can be very difficult, and the measurements may be affected by various confounding factors due to the uncontrolled settings. Therefore, there is a need to detect physiological parameters at various regions of a person’s face, over long periods of time, while the user performs various day-to-day activities in uncontrolled settings.

SUMMARY

[0005] Some aspects of this disclosure involve head-mounted systems, such as smartglasses, configured to perform one or more of the following: utilize correlations between PPG signals and iPPG signals to improve detection of physiological responses, monitor blood glucose level, detect fever from images and temperatures, detect congestive heart failure, detect respiratory tract infection based on changes in coughing sounds, and enable adjustment of the smartglasses’ frame to improve the smartglasses’ fit.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The embodiments are herein described by way of example only, with reference to the following drawings:

[0007] FIG. 1 is a schematic illustration of a system to detect a physiological response using multiple PPG signals measured by different types of sensors;

[0008] FIG. 2 and FIG. 3 illustrate embodiments of smartglasses with sensors;

[0009] FIG. 4 is a schematic illustration of a system to calculate blood glucose levels;

[0010] FIG. 5 illustrates selecting images based on times of systolic notches and peaks identified in the PPG signal;

[0011] FIG. 6 is a schematic illustration of a system to detect fever and/or intoxication;

[0012] FIG. 7A and FIG. 7B illustrate examples of hemoglobin concentration patterns of a sober person and an intoxicated person;

[0013] FIG. 8 and FIG. 9 illustrate systems to calculate an extent of CHF;

[0014] FIG. 10 and FIG. 11 illustrate systems to monitor a user’s coughing;

[0015] FIG. 12 and FIG. 13 illustrate embodiments of smartglasses designed to be bent around a human ear to improve the smartglasses’ fit to the wearer; and

[0016] FIG. 14 and FIG. 15 are schematic illustrations of possible embodiments for computers.

DETAILED DESCRIPTION

[0017] Herein the terms “photoplethysmogram signal”, “photoplethysmographic signal”, “photoplethysmography signal”, and other similar variations are interchangeable and refer to the same type of signal. A photoplethysmogram signal may be referred to as a “PPG signal”, or an “iPPG signal” when specifically referring to a PPG signal obtained from a camera. The terms “photoplethysmography device”, “photoplethysmographic device”, “photoplethysmogram device”, and other similar variations are also interchangeable and refer to the same type of device that measures a signal from which it is possible to extract the photoplethysmogram signal. The photoplethysmography device may be referred to as “PPG device”.

[0018] Sentences in the form of “a sensor configured to measure a signal indicative of a photoplethysmogram signal” refer to at least one of: (i) a contact PPG device, such as a pulse oximeter that illuminates the skin and measures changes in light absorption, where the changes in light absorption are indicative of the PPG signal, and (ii) a non-contact camera that captures images of the skin, where a computer extracts the PPG signal from the images using an imaging photoplethysmography (iPPG) technique. Other names known in the art for iPPG include: remote photoplethysmography (rPPG), remote photoplethysmographic imaging, remote imaging photoplethysmography, remote-PPG, and multi-site photoplethysmography (MPPG).

[0019] Analysis of PPG signals usually includes the following steps: filtration of a PPG signal (such as applying bandpass filtering and/or heuristic filtering), extraction of feature values from fiducial points in the PPG signal (and in some cases may also include extraction of feature values from non-fiducial points in the PPG signal), and analysis of the feature values.
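
The steps above can be illustrated with a minimal sketch in Python, assuming a PPG signal sampled at a known rate; the filter band, minimum peak spacing, and feature choices below are illustrative assumptions rather than parameters specified in this disclosure.

```python
# Illustrative sketch of the PPG analysis steps described above: filtration,
# fiducial-point extraction, and computation of a few feature values.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bandpass(ppg, fs, low=0.5, high=8.0, order=3):
    """Band-pass filter a raw PPG signal to an assumed cardiac band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, ppg)

def fiducial_points(ppg_filtered, fs):
    """Locate systolic peaks and the notches (minima) between consecutive peaks."""
    peaks, _ = find_peaks(ppg_filtered, distance=int(0.4 * fs))  # peaks >= 0.4 s apart
    notches = np.array([np.argmin(ppg_filtered[p1:p2]) + p1
                        for p1, p2 in zip(peaks[:-1], peaks[1:])])
    return peaks, notches

def basic_features(ppg_filtered, peaks, fs):
    """Example feature values derived from the fiducial points."""
    ibi = np.diff(peaks) / fs                      # interbeat intervals (seconds)
    return {"heart_rate_bpm": 60.0 / ibi.mean(),
            "ibi_std_s": float(ibi.std()),
            "mean_peak_amplitude": float(ppg_filtered[peaks].mean())}
```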

[0020] Examples of features that can be extracted from the PPG signal, together with schematic illustrations of the feature locations on the PPG signal, can be found in the following four publications: (i) Peltokangas, Mikko, et al. "Parameters extracted from arterial pulse waves as markers of atherosclerotic changes: performance and repeatability." IEEE journal of biomedical and health informatics 22.3 (2017): 750-757; (ii) Ahn, Jae Mok. "New aging index using signal features of both photoplethysmograms and acceleration plethysmograms." Healthcare informatics research 23.1 (2017): 53-59; (iii) Charlton, Peter H., et al. "Assessing mental stress from the photoplethysmogram: a numerical study." Physiological measurement 39.5 (2018): 054001; and (iv) Peralta, Elena, et al. "Optimal fiducial points for pulse rate variability analysis from forehead and finger photoplethysmographic signals." Physiological measurement 40.2 (2019): 025007. Although these references describe manual feature selection, the features may be selected using any appropriate feature engineering technique, including using automated feature engineering tools.

[0021] Unless there is a specific reference to a specific derivative of the PPG signal, phrases of the form of “based on the PPG signal” refer to the PPG signal and any derivative thereof. Algorithms for filtration of the PPG signal (and/or the images in the case of iPPG), extraction of feature values from fiducial points in the PPG signal, and analysis of the feature values extracted from the PPG signal are well known in the art, and can be found for example in the following references: (i) Allen, John. "Photoplethysmography and its application in clinical physiological measurement." Physiological measurement 28.3 (2007); (ii) Elgendi, Mohamed. "On the analysis of fingertip photoplethysmogram signals." Current cardiology reviews 8.1 (2012); (iii) Holton, Benjamin D., et al. "Signal recovery in imaging photoplethysmography." Physiological measurement 34.11 (2013); (iv) Sun, Yu, and Nitish Thakor. "Photoplethysmography revisited: from contact to noncontact, from point to imaging." IEEE Transactions on Biomedical Engineering 63.3 (2015); (v) Kumar, Mayank, Ashok Veeraraghavan, and Ashutosh Sabharwal. "DistancePPG: Robust non-contact vital signs monitoring using a camera." Biomedical optics express 6.5 (2015); and (vi) Wang, Wenjin, et al. "Algorithmic principles of remote PPG." IEEE Transactions on Biomedical Engineering 64.7 (2016).

[0022] In the case of iPPG, the input comprises images having multiple pixels. The images from which the iPPG signal and/or hemoglobin concentration patterns are extracted may undergo various preprocessing to improve the signal, such as color space transformation, blind source separation using algorithms such as independent component analysis (ICA) or principal component analysis (PCA), and various filtering techniques, such as detrending, bandpass filtering, and/or continuous wavelet transform (CWT). Various preprocessing techniques known in the art that may assist in extracting iPPG signals from images are discussed in Zaunseder et al. (2018), “Cardiovascular assessment by imaging photoplethysmography - a review”, Biomedical Engineering 63(5), 617-634.
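
As a rough illustration of the preprocessing mentioned above, the following sketch spatially averages a skin region of interest, detrends and band-pass filters the per-channel traces, and applies PCA as one possible blind source separation step; the mask handling, cut-off frequencies, and component-selection heuristic are assumptions for illustration only.

```python
# Rough sketch of iPPG preprocessing: spatial averaging over a skin ROI,
# detrending, band-pass filtering, and source separation via PCA.
import numpy as np
from scipy.signal import detrend, butter, filtfilt
from sklearn.decomposition import PCA

def roi_traces(frames, roi):
    """frames: (T, H, W, 3) video array; roi: boolean (H, W) skin mask.
    Returns a (T, 3) array of mean R, G, B values inside the ROI per frame."""
    return np.array([frame[roi].mean(axis=0) for frame in frames])

def ippg_from_traces(traces, fs, low=0.7, high=4.0):
    """Detrend, band-pass, and separate sources to obtain a candidate iPPG signal."""
    x = detrend(traces, axis=0)
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    x = filtfilt(b, a, x, axis=0)
    components = PCA(n_components=3).fit_transform(x)
    # Heuristic: keep the component whose spectrum is most concentrated at a
    # single frequency in the cardiac band; a real pipeline might use ICA or
    # the methods cited in the review above.
    freqs = np.fft.rfftfreq(len(components), d=1.0 / fs)
    band = (freqs >= low) & (freqs <= high)
    power = np.abs(np.fft.rfft(components, axis=0)) ** 2
    scores = power[band].max(axis=0) / power[band].sum(axis=0)
    return components[:, np.argmax(scores)]
```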

[0023] Various embodiments described herein involve calculations based on machine learning approaches. Herein, the terms “machine learning approach” and/or “machine learning-based approaches” refer to learning from examples using one or more approaches. Examples of machine learning approaches include: decision tree learning, association rule learning, regression models, nearest neighbors classifiers, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, sparse dictionary learning, genetic algorithms, rule-based machine learning, and/or learning classifier systems. Herein, a “machine learning-based model” is a model trained using one or more machine learning approaches.

[0024] Herein, “feature values” (also known as feature vector, feature data, and numerical features) may be considered input to a computer that utilizes a model to perform the calculation of a value based on the input. In addition to feature values generated based on measurements taken by sensors mentioned in a specific embodiment, at least some feature values utilized by a computer of the specific embodiment may be generated based on additional sources of data that were not specifically mentioned in the specific embodiment. Some examples of such additional sources of data include: contextual information, information about the user being measured, measurements of the environment, and values of physiological signals of the user obtained by other sensors.

[0025] Sentences in the form of “inward-facing head-mounted camera” refer to a camera configured to be worn on a user’s head and to remain pointed at its region of interest (ROI). It is noted that the elliptic and other shapes of the ROIs in some of the drawings are just for illustration purposes, and the actual shapes of the ROIs are usually not as illustrated.

[0026] Various embodiments described herein involve a head-mounted system that may be connected, using wires and/or wirelessly, with a device carried by the user and/or a non-wearable device. The head-mounted system may include a battery, a computer, sensors, and a transceiver.

[0027] FIG. 14 and FIG. 15 are schematic illustrations of possible embodiments for computers (400, 410) that are able to realize one or more of the embodiments discussed herein that include a “computer”. The computer (400, 410) may be implemented in various ways, such as, but not limited to, a microcontroller, a computer on a chip, a system-on-chip (SoC), a system-on-module (SoM), a processor with its required peripherals, a server computer, and/or any other computer form capable of executing a set of computer instructions. Further, references to a computer or a processor include any collection of one or more computers and/or processors (which may be at different locations) that individually or jointly execute one or more sets of computer instructions. This means that the singular term “computer” is intended to imply one or more computers, which jointly perform the functions attributed to “the computer”. In particular, some functions attributed to the computer may be performed by a computer on a wearable device (e.g., smartglasses) and/or a computer of the user (e.g., smartphone), while other functions may be performed on a remote computer, such as a cloud-based server.

[0028] The computer 400 includes one or more of the following components: processor 401, memory 402, computer readable medium 403, user interface 404, communication interface 405, and bus 406. The computer 410 includes one or more of the following components: processor 411, memory 412, and communication interface 413.

[0029] At least some of the methods described herein are “computer-implemented methods” that are implemented on a computer, such as the computer (400, 410), by executing instructions on the processor (401, 411). Additionally, at least some of these instructions may be stored on a non-transitory computer-readable medium.

[0030] As used herein, references to "one embodiment" (and its variations) mean that the feature being referred to may be included in at least one embodiment of the invention. Separate references to embodiments may refer to the same embodiment and/or different embodiments.

[0031] Sentences in the form of “X is indicative of Y” mean that X includes information correlated with Y, up to the case where X equals Y. The word “most” of something is defined as above 51% of the something (including 100% of the something). Both a “portion” of something and a “region” of something refer to a value between a fraction of the something and 100% of the something. The term “region” is open-ended claim language, and a camera said to capture a specific region on the face may capture just a small part of the specific region, the entire specific region, and/or a portion of the specific region together with additional region(s). The phrase “based on” indicates open-ended claim language, and is to be interpreted as “based, at least in part, on”. The terms “first”, “second” and so forth are to be interpreted merely as ordinal designations, and shall not be limited in themselves.

[0032] The embodiments of the invention may include any variety of combinations and/or integrations of the features of the embodiments described herein. Although some embodiments may depict serial operations, the embodiments may perform certain operations in parallel and/or in different orders from those depicted. Embodiments described in conjunction with specific examples are presented by way of example, and not limitation.

[0033] Utilizing correlations between PPG signals and iPPG signals to improve detection of physiological responses.

[0034] A photoplethysmogram signal (PPG signal) is an optically obtained plethysmogram that is indicative of blood volume changes in the microvascular bed of tissue. A PPG signal is often obtained by using a pulse oximeter, which illuminates the skin and measures changes in light absorption. Another possibility for obtaining the PPG signal is using an imaging photoplethysmography (iPPG) device. As opposed to typical PPG devices, which usually come in contact with the skin, iPPG does not require contact with the skin and is obtained by a non-contact sensor, such as a video camera. iPPG has emerged as a promising technology for detecting physiological signals and various physiological responses whose manifestation involves changes to facial blood flow (e.g., an allergic reaction, a stroke, a migraine, stress, certain emotional responses, manifestation of pain, and blood pressure pulse waves). However, due to the fact that iPPG signals are typically acquired using non-contact cameras, the iPPG signals are often noisy and greatly affected by confounding factors such as illumination from the environment and body movement. Thus, there is a need for a way to increase accuracy of iPPG signals in order to improve the ability to utilize iPPG signals to detect physiological responses.

[0035] This paragraph discloses claims that the Applicant may file in a divisional patent application. The order of the optional dependent claims below is not limiting, and the dependent claims may be arranged according to any order and multiple dependencies. 1. A system comprising: a head-mounted contact photoplethysmography device configured to measure a signal indicative of photoplethysmogram signal (PPG signal) at a first region that comprises exposed skin on a user’s head; an inward-facing head-mounted camera configured to capture images of a second region that comprises exposed skin on the user’s head; wherein the camera is not in physical contact with the second region; and a computer configured to detect a physiological response based on: (i) imaging photoplethysmogram signals (iPPG signals) recognizable in the images, and (ii) correlations between the PPG signal and the iPPG signals. 2. The system of claim 1, wherein the camera and the contact photoplethysmography device are physically coupled to smartglasses or to a smart-helmet, which is designed to measure the user in day-to-day activities over a duration of months to years; and wherein the computer is further configured to extract the iPPG signals from the images based on values of time-segments in which the iPPG signals were expected to appear as a function of locations of respective regions of the iPPG signals relative to location of the contact PPG device. 3. The system of claim 1, wherein the camera and the contact photoplethysmography device are physically coupled to smartglasses or to a smart-helmet, which is designed to measure the user in day-to-day activities over a duration of months to years; and wherein the computer is further configured to extract from the PPG signal at least one of the following parameters: systolic peak, dicrotic notch, diastolic peak, interbeat interval, and systolic-diastolic peak-to-peak time; and the computer is further configured to utilize at least one of the parameters to correlate between the PPG signal and the iPPG signals. 4. The system of claim 1, wherein the camera and the contact photoplethysmography device are physically coupled to smartglasses or to a smart-helmet, which is designed to measure the user in day-to-day activities over a duration of months to years; and wherein the computer is further configured to extract from the PPG signal times corresponding to at least one of the following types of fiducial points: systolic peaks, dicrotic notches, and diastolic peaks; and the computer is further configured to utilize the times to extract corresponding fiducial points in the iPPG signals by adding to the times an offset in order to determine when the corresponding fiducial points manifest in the iPPG signals. 5. The system of claim 1, wherein the computer is further configured to identify times at which fiducial points appear in the PPG signal; calculate, based on the times, time-segments in which the fiducial points are expected to appear in the iPPG signals; and detect a physiological response based on values of the iPPG signals during the time-segments. 6.
The system of any one of claims 1 to 5, wherein the iPPG signals comprise multiple values for different sub-regions of the second region, and the physiological response is detected based on differences between amplitudes of the values recognizable in the different sub-regions of the second region; and wherein the physiological response is indicative of an allergic reaction, and the sub-regions of the second region comprise portions of at least two of the following areas on the user’s face: nose, upper lip, lips, cheeks, temples, periorbital area around the eyes, and the forehead. 7. The system of any one of claims 1 to 5, wherein the iPPG signals comprise multiple values for different sub-regions of the second region, and the physiological response is detected based on differences between amplitudes of the values recognizable in the different sub-regions of the second region; and wherein the physiological response is indicative of a stroke, and the sub-regions of the second region comprise at least one of the following pairs on the user’s face: left and right cheeks, left and right temples, left and right sides of the forehead, and left and right sides of the periorbital area around the eyes. 8. The system of any one of claims 1 to 5, wherein the iPPG signals comprise multiple values for different sub-regions of the second region, and the physiological response is detected based on differences between amplitudes of the values recognizable in the different sub-regions of the second region; and wherein the physiological response is indicative of a migraine, and the sub-regions of the second region comprise at least one of the following pairs on the user’s face: left and right sides of the forehead, left and right temples, left and right sides of the periorbital area around the eyes, and left and right cheeks. 9. The system of any one of claims 1 to 5, wherein the iPPG signals comprise multiple values for different sub-regions of the second region, and the physiological response is detected based on differences between amplitudes of the values recognizable in the different sub-regions of the second region; and wherein the physiological response is indicative of a blood pressure value that is calculated based on differences in pulse transit times detectable in the sub-regions of the second region; wherein the sub-regions comprise at least two of the following areas on the user’s face: left temple, right temple, left side of the forehead, right side of the forehead, left cheek, right cheek, nose, periorbital area around the left eye, and periorbital area around the right eye. 10.
The system of any one of claims 1 to 5, wherein the iPPG signals comprise multiple values for different sub-regions of the second region, and the physiological response is detected based on differences between amplitudes of the values recognizable in the different sub-regions of the second region; and wherein the physiological response is indicative of at least one of stress, emotional response, and pain, which are calculated based on changes to hemoglobin concentrations observable in the iPPG signals relative to previous measurements of hemoglobin concentrations observable in the iPPG signals of the user; wherein the sub-regions comprise at least two of the following areas on the user’s face: lips, upper lip, chin, left temple, right temple, left side of the forehead, right side of the forehead, left cheek, right cheek, left ear lobe, right ear lobe, nose, periorbital area around the left eye, and periorbital area around the right eye. 11. A method for calculating emotional response, comprising: measuring a signal indicative of a photoplethysmogram signal (PPG signal) at a first region comprising exposed skin on a user’s head utilizing a head-mounted contact photoplethysmography device; capturing images of a second region comprising exposed skin on the user’s head utilizing an inward-facing head-mounted camera; utilizing correlations between the PPG signal and imaging photoplethysmogram signals (iPPG signals) recognizable in the images to detect emotional response of the user based on changes to hemoglobin concentrations observable in the iPPG signals relative to previous measurements of hemoglobin concentrations observable in the iPPG signals of the user; wherein the iPPG signals comprise multiple values for different sub-regions of the second region, and the sub-regions comprise at least two of the following areas on the user’s face: lips, upper lip, chin, left temple, right temple, left side of the forehead, right side of the forehead, left cheek, right cheek, and periorbital areas around the eyes.

[0036] FIG. 1 illustrates one embodiment of a system that utilizes multiple PPG signals, measured by different types of sensors, to detect a physiological response. In one embodiment, the system includes at least a head-mounted contact PPG device 782, an inward-facing head-mounted camera 784, and a computer 780 that may or may not be head-mounted.

[0037] In one embodiment, the head-mounted contact PPG device 782 measures a signal indicative of a PPG signal 783 at a first region of interest (ROI1) on a user’s body. ROI1 includes a region of exposed skin on the user’s head, and the PPG device 782 includes one or more light sources configured to illuminate ROI1, and one or more photodetectors configured to detect extents of reflections from ROI1.

[0038] The camera 784 captures images 785 of a second region of interest (ROI2) on the user’s head. Optionally, the camera is located more than 10 mm away from the user’s head. The camera 784 may or may not utilize a light source to illuminate ROI2. References to the camera 784 may involve more than one camera. Optionally, the camera 784 may refer to two or more inward-facing head-mounted cameras, and ROI2 includes two or more regions on the user’s head that are respectively captured by the two or more inward-facing head-mounted cameras. Optionally, ROI2 covers a larger area of exposed skin than ROI1. In one example, the area of ROI2 is at least ten times larger than the area of ROI1. In one example, the PPG device 782 does not obstruct the field of view of the camera 784 to ROI2.

[0039] The PPG device 782 and the camera 784 are physically coupled to a frame of smartglasses or to a smart-helmet, which is designed to measure the user in day-to-day activities, over a duration of weeks, months, and/or years. FIG. 2 and FIG. 3 illustrate smartglasses that may be utilized to realize the invention described herein.

[0040] FIG. 2 illustrates smartglasses that include camera 796 and several contact PPG devices. The contact PPG devices correspond to the PPG device 782 and are used to measure the PPG signal 783. The contact PPG devices may be coupled at various locations on the frame 794, and thus may come in contact with various regions on the user’s head. For example, contact PPG device 791a is located on the right temple tip, which brings it into contact with a region behind the user’s ear (when the user wears the smartglasses). Contact PPG device 791b is located on the right temple of the frame 794, which puts it in contact with a region on the user’s right temple. It is to be noted that in some embodiments, in order to bring the contact PPG device close such that it touches the skin, various apparatuses may be utilized, such as spacers (e.g., made from rubber or plastic), and/or adjustable inserts that can help bridge a possible gap between the frame’s temple and the user’s face. Such an apparatus is spacer 792, which brings contact PPG device 791b in contact with the user’s temple when the user wears the smartglasses. Another possible location for a contact PPG device is the nose bridge, as contact PPG device 791c is illustrated in the figure. It is to be noted that the contact PPG device 791c may be embedded in the nose bridge (or one of its components), and/or physically coupled to a part of the nose bridge.

[0041] FIG. 3 illustrates smartglasses that include at least first, second, and third inward-facing cameras, each of which may correspond to the camera 784. The figure illustrates a frame 797 to which a first inward-facing camera 795a is coupled above the lens that is in front of the right eye, a second inward-facing camera 795b that is coupled to the frame 797 above the lens that is in front of the left eye, and a third inward-facing camera 795c that is on the left side of the frame 797 and configured to capture images of lower portions of the user’s face (e.g., portions of the left cheek). Contact PPG device 798 includes a leaf that brings LED 799a and sensor 799b in contact with the user’s temple when the user wears the smartglasses.

[0042] The computer 780 is configured, in some embodiments, to detect a physiological response based on: (i) imaging photoplethysmogram signals (iPPG signals) recognizable in the images 785, and (ii) correlations between the PPG signal 783 and the iPPG signals. Some examples of physiological responses that may be detected include: an allergic reaction, a stroke, a migraine, stress, a certain emotional response, pain, and blood pressure value. Optionally, the computer 780 forwards an indication of a detection of the physiological response 789 to a device of the user and/or to another computer system. Examples of computers that may be utilized to perform this detection are computer 400 or computer 410 illustrated in FIG. 14 and FIG. 15, respectively.

[0043] Herein, sentences of the form “iPPG signal recognizable in the images” refer to a signal indicative of effects of blood volume changes due to pulse waves that may be extracted from a series of images of the region. These changes may manifest as color changes to certain regions (pixels) in the images, may be identified and/or utilized by the computer, and are usually not recognizable to the naked eye. Additionally, stating that a computer performs a calculation based on a certain value that is recognizable in certain data does not necessarily imply that the computer explicitly extracts the value from the data. For example, the computer may perform its calculation without explicitly extracting the iPPG signal. Rather, data that reflects the iPPG signal may be provided as input utilized by a machine learning algorithm. Many machine learning algorithms (e.g., neural networks) can utilize such an input without the need to explicitly calculate the value that is “recognizable”.

[0044] Herein, detecting the physiological response may mean detecting that the user is experiencing the physiological response, and/or that there is an onset of the physiological response. In the case of the physiological response being associated with one or more values (e.g., blood pressure), detecting the physiological response may mean calculating the one or more values. Detecting the physiological response may involve calculating one or more of the following values: an indication of whether or not the user is experiencing the physiological response, a value indicative of an extent to which the user is experiencing the physiological response, a duration since the onset of the physiological response, and/or a duration until an onset of the physiological response.

[0045] In some embodiments, the computer 780 detects the physiological response utilizing previously taken PPG signals of the user. Having such previous values can assist the computer 780 to detect changes to blood flow that may be indicative of certain physiological responses. In some embodiments, previously taken PPG signals and/or images are used to generate baseline values representing baseline properties of the user’s blood flow. A baseline value may be calculated in various ways, such as a function of the average measurements of the user, a function of the situation the user is in, and/or a function of an intake of some substances.
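
One simple way such a baseline could be maintained is sketched below: an exponentially weighted average of a blood-flow property, optionally kept separately per situation (e.g., rest, exercise, after caffeine). The class name, decay factor, and keying by situation are hypothetical choices for illustration, not a description of how the computer 780 operates.

```python
# Minimal sketch of a per-situation baseline of a blood-flow property
# (e.g., pulse amplitude), updated as new measurements arrive.
class BaselineTracker:
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.baselines = {}            # situation -> running baseline value

    def update(self, situation, value):
        prev = self.baselines.get(situation, value)
        self.baselines[situation] = (1 - self.alpha) * prev + self.alpha * value
        return self.baselines[situation]

    def deviation(self, situation, value):
        """How far the current value is from the user's baseline for this situation."""
        baseline = self.baselines.get(situation)
        return None if baseline is None else value - baseline
```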

[0046] There are various ways in which the computer 780 may utilize correlations between the PPG signal 783 and the iPPG signals to detect the physiological response. In some embodiments, the computer 780 may rely on the fact that due to the proximity of ROI1 and ROI2 (both being on the head and consequently, close by) the appearances of pulse waves at the different ROIs are highly correlated. This fact may be utilized by the computer 780 to identify fiducial points in the PPG signal 783, which is often a strong signal, and then to identify the corresponding fiducial points in the correlated iPPG signals (that are noisier than the PPG signal). Additionally or alternatively, when using a machine learning-based approach, at least some of the feature values used by the computer 780 may reflect values related to correlations between the PPG signal 783 and the iPPG signals (e.g., values of similarity and/or offsets between the PPG signal 783 and the iPPG signals). Both uses of correlations are elaborated on further below.

[0047] Because the PPG device 782 touches and occludes ROI1, while the camera 784 does not occlude ROI2, the PPG signal 783 extracted from the PPG device 782 usually has a much better signal-to-noise ratio (SNR) compared to the iPPG signals extracted from the images 785 of ROI2. In addition, the PPG signal 783 typically suffers much less from illumination changes compared to the iPPG signals.

[0048] Furthermore, because both ROI1 and ROI2 are on the user’s head, and because the PPG device 782 and the camera 784 measure the user essentially simultaneously, manifestations of the pulse arrival in the PPG signal 783 and the iPPG signals are typically highly correlated (e.g., the signals exhibit highly correlated pulse arrival times). This correlation enables the computer 780 to utilize pulse fiducial points identified in the PPG signal 783 to extract information from iPPG signals more efficiently and accurately.

[0049] In one embodiment, the computer 780 extracts from the PPG signal 783 one or more values that may serve as a basis to correlate between the PPG signal 783 and the iPPG signals. Optionally, the extracted values are indicative of one or more of the following PPG waveform fiducial points: a systolic peak, a dicrotic notch, and/or a diastolic peak. Optionally, the extracted values may be indicative of a timing of a certain fiducial point, and/or the magnitude of the PPG signal 783 at the time corresponding to the certain fiducial point. Additionally, the extracted values may be indicative of other waveform properties, such as an interbeat interval, and a systolic-diastolic peak-to-peak time.

[0050] Knowing an identification of fiducial points in the PPG signal 783 provides useful information for determining when these events are to be expected to manifest in the noisy iPPG signals. An offset used between when a fiducial point occurs in the PPG signal 783, and when it manifests in each of the iPPG signals, may be a fixed offset or may have different offsets that are calculated empirically relative to the timings of the PPG signal. Optionally, the iPPG signals are extracted from the images based on values of time-segments in which the iPPG signals were expected to appear as a function of the locations of respective regions of the iPPG signals relative to the location of the contact PPG device. Optionally, times corresponding to fiducial points, as determined based on the PPG signal 783, may be used to set a range of times during which the same fiducial point is expected to manifest in an iPPG signal, and the range may be adjusted according to values such as the heart rate and/or blood pressure, and may also be learned for a specific user.
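
The following sketch illustrates the idea of deriving expected time-segments for the iPPG signals from fiducial times found in the contact PPG signal, shifted by a per-region offset; the offset value and window width are illustrative assumptions, and as noted above the offsets may instead be learned empirically for a specific user.

```python
# Sketch: fiducial times from the (clean) contact PPG define short windows in
# which the same fiducial is expected in each (noisy) iPPG region.
import numpy as np

def expected_windows(ppg_fiducial_times, region_offset_s, half_width_s=0.05):
    """Return (start, end) time-segments for one iPPG region."""
    t = np.asarray(ppg_fiducial_times) + region_offset_s
    return np.stack([t - half_width_s, t + half_width_s], axis=1)

# Example: systolic peaks found in the contact PPG at these times (seconds),
# and a cheek region whose pulse wave is assumed to arrive ~30 ms later.
peak_times = [0.81, 1.62, 2.44, 3.25]
windows = expected_windows(peak_times, region_offset_s=0.03)
# Each row is a time-segment in which to look for the peak in that region's iPPG.
```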

[0051] Another way to describe the benefit of measuring simultaneously the PPG signal 783 and iPPG signals from images 785 involves the fact that often the iPPG signals are weak relative to the noise. Therefore, automatic detection of the iPPG signals requires discrimination between true PPG pulses and random fluctuations due to the noise. In one embodiment, an algorithm for the selection of the iPPG pulses is based on the values of time-segments in which the iPPG signals are expected to appear as a function of their location relative to the location of the PPG device 782. Optionally, the detected iPPG signals in these time-segments are identified as iPPG signals if they meet one or more criteria based on (i) the spatial waveform of the iPPG signals relative to the reference PPG signal, (ii) correlation between each iPPG signal in the current time-segment and a predetermined number of neighboring time-segments, and (iii) correlations between iPPG signals extracted from neighboring regions of exposed skin on the head, which are expected to show essentially the same rhythm with a bounded time delay. Optionally, the signals are taken as iPPG signals if minimal values of the criteria are obtained in several time-segments. The minimal values and the number of time-segments can be determined in order to achieve minimal standard deviation of the differences between the values of the heart rate extracted from the noisy iPPG signals and the reference heart rate extracted from the less noisy PPG signal.
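
A simplified version of such selection criteria is sketched below: a candidate iPPG pulse in a time-segment is accepted only if it correlates sufficiently with the reference contact-PPG pulse and with pulses from neighboring time-segments. The thresholds, and the assumption that the pulse segments have been resampled to equal length, are illustrative rather than values from this disclosure.

```python
# Illustrative acceptance test for candidate iPPG pulses.
import numpy as np

def ncc(a, b):
    """Normalized correlation of two equal-length pulse segments."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.dot(a, b) / len(a))

def accept_pulse(candidate, reference_pulse, neighbor_pulses,
                 min_ref_corr=0.5, min_neighbor_corr=0.4):
    """Accept a candidate iPPG pulse if it matches the contact-PPG reference
    pulse and the pulses found in neighboring time-segments."""
    ref_ok = ncc(candidate, reference_pulse) >= min_ref_corr
    neighbors_ok = all(ncc(candidate, n) >= min_neighbor_corr
                       for n in neighbor_pulses)
    return ref_ok and neighbors_ok
```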

[0052] As part of the calculations involved in detecting the physiological response, the computer 780 may perform various filtering and/or processing procedures to the PPG signal 783, the images 785, and/or iPPG signals extracted from the images 785. Some non-limiting examples of the preprocessing include: normalization of pixel intensities (e.g., to obtain a zero-mean unit variance time series signal), and conditioning a time series signal by constructing a square wave, a sine wave, or a user defined shape, such as that obtained from an ECG signal or a PPG signal as described in US patent number 8617081.

[0053] The computer may utilize a machine learning approach to detect the physiological response. This approach may involve the computer generating feature values based on data that includes the PPG signal 783, the images 785 and/or iPPG signals recognizable in the images 785, and optionally other data. Optionally, at least some of the feature values are based on correlations between the PPG signal 783 and the iPPG signals. The computer 780 then utilizes a previously trained model 779 to calculate one or more values indicative of whether, and/or to what extent, the user is experiencing the physiological response.

[0054] Feature values generated based on PPG signals (e.g., the PPG signal 783 and/or one or more of the iPPG signals extracted from the images 785) may include various types of values, which may be indicative of dynamics of the blood flow at the respective regions to which the PPG signals correspond. Optionally, these feature values may relate to properties of a pulse waveform, which may be a specific pulse waveform (which corresponds to a certain beat of the heart), or a window of pulse waveforms (e.g., an average property of pulse waveforms in a certain window of time).
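
A hedged sketch of this flow is shown below: feature dictionaries computed from the contact PPG signal, from each iPPG region, and from their correlations are concatenated into one vector and passed to a previously trained estimator. The function names and the generic model object are placeholders for illustration and do not describe model 779 itself.

```python
# Sketch of assembling feature values from several sources and applying a
# previously trained model to detect a physiological response.
import numpy as np

def build_feature_vector(ppg_features, ippg_features_per_region, corr_features):
    """Concatenate per-source feature dictionaries into one numeric vector."""
    values = list(ppg_features.values())
    for region_features in ippg_features_per_region:
        values.extend(region_features.values())
    values.extend(corr_features.values())
    return np.array(values, dtype=float)

def detect_response(model, feature_vector):
    """model: any trained estimator with a predict() method (e.g., scikit-learn)."""
    return model.predict(feature_vector.reshape(1, -1))[0]
```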

[0055] Some examples of feature values that may be generated based on a pulse waveform include: the area under the pulse waveform, the amplitude of the pulse waveform, a derivative and/or second derivative of the pulse waveform, a pulse waveform shape, pulse waveform energy, and pulse transit time (to the respective ROI). Optionally, some feature values may be derived from fiducial points identified in the PPG signals; these may include values such as magnitudes of the PPG signal at certain fiducial points, time offsets between different fiducial points, and/or other differences between fiducial points. Some examples of fiducial point-based feature values may include one or more of the following: a magnitude of a systolic peak, a magnitude of a diastolic peak, duration of the systolic phase, and duration of the diastolic phase. Additional examples of feature values may include properties of the cardiac activity, such as the heart rate and heart rate variability (as determined from the PPG signal). Additionally, some feature values may include values of other physiological signals that may be calculated based on PPG signals, such as blood pressure and cardiac output.
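
For concreteness, the sketch below computes a few of the listed feature values for a single pulse waveform sampled at fs Hz; which features are used, and how they are normalized, is left open in the text, so this is only one illustrative possibility.

```python
# Example computations for a handful of pulse-waveform features.
import numpy as np

def pulse_waveform_features(pulse, fs):
    """pulse: samples of a single heartbeat's PPG waveform, starting at pulse onset."""
    t_peak = int(np.argmax(pulse))               # systolic peak index
    return {
        "area_under_pulse": float(np.trapz(pulse, dx=1.0 / fs)),
        "amplitude": float(pulse.max() - pulse.min()),
        "systolic_phase_s": t_peak / fs,         # time from pulse onset to peak
        "diastolic_phase_s": (len(pulse) - t_peak) / fs,
        "max_first_derivative": float(np.max(np.gradient(pulse) * fs)),
        "pulse_energy": float(np.sum(pulse ** 2)),
    }
```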

[0056] The aforementioned feature values may be calculated in various ways. In one example, some feature values are calculated for each PPG signal individually. In another example, some feature values are calculated after normalizing a PPG signal with respect to previous measurements from the corresponding PPG device used to measure the PPG signal. In other examples, at least some of the feature values may be calculated based on an aggregation of multiple PPG signals (e.g., different pixels/regions in images captured by an iPPG device), or by aggregating values from multiple contact PPG devices.

[0057] Some of the feature values may include values indicative of correlations between the PPG signal 783 and iPPG signals extracted from the images 785. Examples of these correlations include: (1) offsets between when certain fiducial points appear in the PPG signal 783 and when they appear in each of the iPPG signals, (2) offsets at which the correlation (e.g., as calculated by a dot-product) between the PPG signal 783 and the iPPG signals is maximized, and (3) maximal value of correlation between the PPG signal 783 and the iPPG signals (when using different offsets).
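
These correlation-based feature values could be computed, for example, as sketched below: the normalized correlation between the contact PPG signal and an iPPG signal is evaluated over a range of offsets, and the best offset and its correlation value are kept as features. The maximum-lag bound is an assumed parameter, and the signals are assumed to be zero-mean, equal-length, and sampled at the same rate.

```python
# Sketch of correlation features: best offset and maximal correlation between
# the contact PPG signal and one iPPG signal.
import numpy as np

def correlation_features(ppg, ippg, fs, max_lag_s=0.2):
    max_lag = int(max_lag_s * fs)
    corrs = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = ppg[lag:], ippg[:len(ippg) - lag]
        else:
            a, b = ppg[:lag], ippg[-lag:]
        corrs.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    best = int(np.argmax(corrs))
    return {"best_offset_s": (best - max_lag) / fs,
            "max_correlation": float(corrs[best])}
```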

[0058] In some embodiments, at least some of the feature values may be “raw” or minimally processed measurements of the PPG device 782 and/or the camera 784. Optionally, at least some of the feature values may be pixel values obtained by the camera 784. Optionally, the pixel values may be provided as input to functions in order to generate the feature values that are low-level image-based features.

[0059] In some embodiments, at least some feature values may be generated based on other data sources (in addition to PPG signals), such as movement sensors, thermal cameras, environment temperature and humidity, extent of illumination, and the user’s age, sex, weight, body mass index, skin tone, and/or situation. For example, the computer 780 may generate at least some of the feature values based on one or more values indicative of: a temperature of the user’s body, movement of the user’s body (from a head-mounted Inertial Measurement Unit, IMU 778), consumption of a substance by the user, and/or timing of the user’s inhalation phase and/or exhalation phase (respiratory-phase signal).

[0060] The model 779 utilized to detect the physiological response may be generated, in some embodiments, based on data obtained from one or more users. In the case where the physiological response is a certain medical condition (e.g., an allergic reaction and/or a migraine), at least some of the data used to train the model 779 corresponds to times in which the one or more users were not affected by the physiological response, and additional data used to train the model was obtained while the physiological response occurred and/or following that time. Thus, this training data may reflect PPG signals and/or blood flow both at normal times, and changes to PPG signals and/or blood flow that may ensue due to the physiological response. In the case where the physiological response corresponds to a value of a physiological signal (e.g., blood pressure), data used to train the model 779 may include measurements of the one or more users that are associated with a reference value for the physiological signal (e.g., the reference values may be blood pressure values measured by an external device).
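
The sketch below shows one way such training data could be turned into samples and used to fit a model: each sample pairs a feature vector from a measurement window with a label, and a gradient-boosted classifier stands in for one of the many machine learning approaches the text allows; the data layout is an illustrative assumption.

```python
# Sketch of building a dataset of (feature vector, label) samples and training
# one possible realization of the model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def build_dataset(windows):
    """windows: iterable of (feature_vector, label) pairs collected on
    different days and under different environmental conditions."""
    X = np.array([w[0] for w in windows])
    y = np.array([w[1] for w in windows])
    return X, y

def train_model(windows):
    X, y = build_dataset(windows)
    model = GradientBoostingClassifier()
    model.fit(X, y)
    return model          # a possible stand-in for a trained model
```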

[0061] The aforementioned training data may be used to generate samples, each sample including feature values generated based on PPG signals of a certain user, additional optional data (as described above), and a label. The PPG signals include measurements of the certain user (taken with the PPG device 782 and the camera 784) at a certain time, and optionally previous measurements of the user taken before the certain time. The label is a value related to the physiological response (e.g., an indication of the extent of the physiological response). The label may be indicative of: whether the user experienced a certain physiological response, the extent of the physiological response, the duration until an onset of the physiological response, and/or the duration that has elapsed since the onset of the physiological response.

[0062] In some embodiments, the model 779 may be generated, at least in part, based on data that includes previous measurements of the specific user, and/or based on data of other users. In order to achieve a robust model, the samples used for the training of the model 779 may include samples collected on different days, while indoors and outdoors, and while different environmental conditions persisted.

[0063] Utilizing the model 779 to detect the physiological response may involve the computer 780 performing various operations, depending on the type of model. The following are some examples of various possibilities for the model 779 and the type of calculations that may be accordingly performed by the computer 780 to calculate the physiological response: (a) the model 779 comprises parameters of a decision tree. Optionally, the computer 780 simulates a traversal along a path in the decision tree, determining which branches to take based on the feature values. The certain value may be obtained at the leaf node and/or based on calculations involving values on nodes and/or edges along the path; (b) the model 779 comprises parameters of a regression model (e.g., regression coefficients in a linear regression model or a logistic regression model). Optionally, the computer 780 multiplies the feature values (which may be considered a regressor) with the parameters of the regression model in order to obtain the certain value; and/or (c) the model 779 comprises parameters of a neural network. For example, the parameters may include values defining at least the following: (i) an interconnection pattern between different layers of neurons, (ii) weights of the interconnections, and (iii) activation functions that convert each neuron’s weighted input to its output activation. Optionally, the computer 780 provides the feature values as inputs to the neural network, computes the values of the various activation functions and propagates values between layers, and obtains an output from the network, which is the certain value.

[0064] In some embodiments, the machine learning approach may be deep learning, and the model 779 may include parameters describing multiple hidden layers of a neural network. Optionally, the model 779 may include a convolutional neural network (CNN). In one example, the CNN may be utilized to identify certain patterns in the images 785, such as the patterns corresponding to blood volume effects and ballistocardiographic effects of the cardiac pulse.
Due to the fact that calculating the value indicative of the extent of the physiological response may be based on multiple, possibly successive, images that display a certain pattern of change over time (i.e., across multiple frames), these calculations may involve retaining state information that is based on previous images. Optionally, the model 779 may include parameters that describe an architecture that supports such a capability. In one example, the model 779 may include parameters of a recurrent neural network (RNN), which is a connectionist model that captures the dynamics of sequences of samples via cycles in the network’s nodes. This enables RNNs to retain a state that can represent information from an arbitrarily long context window. In one example, the RNN may be implemented using a long short-term memory (LSTM) architecture. In another example, the RNN may be implemented using a bidirectional recurrent neural network architecture (BRNN).
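
As toy illustrations of options (b) and (c) above, the following sketch applies a logistic-regression model and a small fully connected network to a feature vector; the weights are placeholders, and a CNN or LSTM variant of the kind mentioned for deep learning would normally be built with a deep-learning framework rather than in plain NumPy.

```python
# Toy inference for two of the model types listed above.
import numpy as np

def logistic_regression_predict(x, w, b):
    """Probability of the physiological response from a linear model with
    coefficients w and intercept b (the 'regressor times parameters' case)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

def mlp_predict(x, layers):
    """layers: list of (weight_matrix, bias_vector) pairs; ReLU between hidden
    layers and a sigmoid output are one possible choice of activation functions."""
    h = x
    for i, (W, b) in enumerate(layers):
        h = W @ h + b
        h = np.maximum(h, 0.0) if i < len(layers) - 1 else 1.0 / (1.0 + np.exp(-h))
    return h
```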

[0065] Monitoring blood glucose level with a comfortable head-mounted device.

[0066] The blood glucose level is a very important value for diabetes patients to keep track of. Blood glucose testing provides useful information for diabetes management, and physicians recommend diabetes patients monitor their blood glucose level several times a day. This is often quite inconvenient and painful, since most blood glucose monitoring techniques involve invasive drawing of blood samples. Thus, there is a need for a more convenient way to monitor blood glucose levels, without the need to draw blood samples multiple times a day on a continuous basis.

[0067] This paragraph discloses claims that the Applicant may file in a divisional patent application. The order of the optional dependent claims below is not limiting, and the dependent claims may be arranged according to any order and multiple dependencies. 1. A system configured to calculate blood glucose level, comprising: a head-mounted contact photoplethysmography device configured to measure a signal indicative of a photoplethysmogram signal (PPG signal) at a first region comprising skin on a user’s head; a head-mounted camera configured to capture images of a second region comprising skin on the user’s head; and a computer configured to: identify, based on the PPG signal, times of systolic notches and times of systolic peaks; and calculate the blood glucose level based on differences between a first subset of the images taken during the times of systolic notches and a second subset of the images taken during the times of systolic peaks. 2. The system of claim 1, wherein the systolic notch is a minimum of a pulse wave in the PPG signal and the systolic peak is the maximum of the pulse wave; and wherein the head-mounted camera is sensitive to at least three noncoinciding wavelength intervals, the head-mounted camera is located more than 5 mm away from the second region, and both the head-mounted camera and the head-mounted contact photoplethysmography device are physically coupled to smartglasses or to a smart-helmet, which is designed to measure the user in day-to-day activities, over a duration of weeks, months, and/or years. 3. The system of claim 1, wherein the first and second regions are fed by different arteries, which cause a time difference between the times of systolic peaks in the PPG signal and times of systolic peaks in imaging photoplethysmogram signals recognizable in the images; and wherein the computer is further configured to calculate the time difference, and to calculate the blood glucose level also based on the time difference. 4. The system of any one of claims 1 to 3, wherein the computer is further configured to extract imaging photoplethysmogram signals (iPPG signals) from the images, and to calculate the blood glucose level based on differences between amplitudes of the iPPG signals during the times of systolic notches and the iPPG signals during the times of systolic peaks. 5. The system of any one of claims 1 to 3, wherein the computer is further configured to: calculate a first hemoglobin concentration pattern based on the first subset of the images, calculate a second hemoglobin concentration pattern based on the second subset of the images, and to calculate the blood glucose level based on differences between the first and second hemoglobin concentration patterns. 6. The system of any one of claims 1 to 3, wherein the computer is further configured to: extract a first set of facial flushing patterns based on the first subset of the images, extract a second set of facial flushing patterns based on the second subset of the images, and to calculate the blood glucose level based on differences between the first and second facial flushing patterns. 7.
The system of any one of claims 1 to 3, wherein the computer is further configured to utilize a machine learning-based model to calculate, based on feature values generated from the first and second subsets of the images, a value indicative of the blood glucose level; wherein the machine learning-based model was trained based on data comprising: 3rd and 4th subsets of the images, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 70 and 100; 5th and 6th subsets of the images, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 100 and 125; 7th and 8th subsets of the images, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 120 and 150; and 9th and 10th subsets of the images, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 150 and 180. 8. The system of any one of claims 1 to 3, further comprising a head-mounted temperature sensor configured to measure skin temperature (Tskin) at a third region on a user’s head; and wherein the computer is configured to (i) generate feature values based on: the PPG signal, the images, and Tskin, and (ii) utilize a machine learning-based model to calculate the blood glucose level based on the feature values. 9. A method for calculating blood glucose level, comprising: receiving, from a head-mounted contact photoplethysmography device, a signal indicative of a photoplethysmogram signal (PPG signal) at a first region comprising skin on a user’s head; receiving, from a head-mounted camera, images of a second region comprising skin on the user’s head; identifying, based on the PPG signal, times of systolic notches and times of systolic peaks; and calculating the blood glucose level based on differences between a first subset of the images taken during the times of systolic notches and a second subset of the images taken during the times of systolic peaks. 10. The method of claim 9, further comprising extracting imaging photoplethysmogram signals (iPPG signals) from the images, and calculating the blood glucose level based on differences between amplitudes of the iPPG signals during the times of systolic notches and the iPPG signals during the times of systolic peaks. 11. The method of claim 9, further comprising calculating a first hemoglobin concentration pattern based on the first subset of the images, calculating a second hemoglobin concentration pattern based on the second subset of the images, and calculating the blood glucose level based on differences between the first and second hemoglobin concentration patterns. 12.
A system configured to calculate blood glucose level, comprising: a head-mounted contact photoplethysmography device configured to measure a signal indicative of a photoplethysmogram signal (PPG signal) at a first region comprising skin on a user’s head; a head-mounted camera configured to capture images of a second region comprising skin on the user’s head; a first head-mounted temperature sensor configured to measure skin temperature (Tskin) at a first region on a user’s head; a second head-mounted temperature sensor configured to measure temperature of the environment (Tenv); and a computer configured to: generate feature values based on: the PPG signal, the images, Tskin, and Tenv; and utilize a machine learning-based model to calculate the blood glucose level based on the feature values. 13. The system of claim 12, wherein the head-mounted camera is sensitive to at least three noncoinciding wavelength intervals (channels), and the computer is configured to generate the feature values based on the images by extracting separate imaging photoplethysmogram signals (iPPG signals) from the images, for each of the channels. 14. The system of claim 12, wherein the machine learning-based model is configured to utilize Tskin to compensate for effects of skin temperature on the PPG signal and the iPPG signals, and to utilize Tenv to compensate for effects of physiologic changes related to regulating the user’s body temperature on the PPG signal and the iPPG signals. 15. The system of claim 12, further comprising an outward-facing head-mounted camera configured to take images of the environment; and wherein the computer is further configured to generate the feature values also based on the images of the environment; whereby the images of the environment are indicative of ambient illumination levels, and are used by the machine learning-based model to improve the accuracy of calculating the blood glucose level based on the images.

[0068] Noninvasive glucose monitoring devices are pain-free and enable taking multiple measurements of glucose levels per day. It is believed that, for some people, the variation of the facial skin color with the heartbeats changes as a function of their blood glucose level. Different facial skin regions show different variations of the facial skin color, and these different variations are also believed to change, for some people, as a function of their blood glucose level. However, these changes are small and cannot be extracted solely from images of skin on the face. Therefore, in some embodiments described herein, a contact photoplethysmography (PPG) device is used to detect times of systolic notches and times of systolic peaks. These times are used to identify which of the images were taken during the times of systolic notches and which of the images were taken during the times of systolic peaks, and based on that, the facial skin colors during the systolic notches and systolic peaks can be compared more accurately in order to calculate the blood glucose level.

[0069] Some of the embodiments herein may be implemented using sensors coupled to smartglasses in order to conveniently, and optionally continuously, monitor the user. Smartglasses are generally comfortable to wear, lightweight, and can have extended battery life. Thus, they are well suited as an instrument for long-term monitoring of a patient’s physiological signals and activity.

[0070] FIG. 4 illustrates an embodiment of a system that calculates blood glucose levels. Embodiments of the system may utilize different types of sensors, which may include a head-mounted contact photoplethysmography device 480 (“PPG device 480”), an inward-facing head-mounted camera 483 (“camera 483”), and a computer 490. Embodiments of the system may optionally include additional components, such as one or more of the following: a head-mounted skin temperature sensor 494 (“skin temperature sensor 494”), a head-mounted environment temperature sensor 496 (“environment temperature sensor 496”), a head-mounted outward-facing camera 498 (“outward-facing camera 498”), and a head-mounted hygrometer 499.

[0071] FIG. 2 (discussed above in more detail) illustrates smartglasses that include camera 483 to capture the images 485, and several optional contact PPG devices (791a, 791b and 791c, which correspond to the PPG device 480) that are used to measure the PPG signal 481.

[0072] In one embodiment, the PPG device 480 measures a signal indicative of a photoplethysmogram signal (PPG signal 481) at a first region comprising skin on a user’s head. Examples for the first region are portions of the user’s nose, temples, and/or skin on a mastoid process on one of the sides of the user’s head. The PPG device 480 may include one or more light sources to illuminate the first region. For example, the one or more light sources may include light emitting diodes (LEDs) that illuminate the first region. Optionally, the one or more LEDs include at least two LEDs, where each illuminates the first region with light at a different wavelength. In one example, the at least two LEDs include a first LED that illuminates the first region with green light and a second LED that illuminates the first region with infrared light. Optionally, the PPG device 480 includes one or more photodetectors configured to detect extents of reflections from the first region. In another example, the PPG device 480 includes four light sources, which may be monochromatic (such as 625 nm, 740 nm, 850 nm, and 940 nm), and a CMOS or CCD image sensor (without a near-infrared filter, at least until 945 nm). The PPG device provides measurements of the light reflected from the skin, and the computer calculates the glucose levels based on associations between combinations of the reflected lights and the user’s blood glucose levels.

[0073] The camera 483 captures images 485 of a second region on the user’s head. In one example, the second region may include a portion of skin on one of the user’s cheeks (e.g., the region 484 illustrated in FIG. 5). In other examples, the second region may include a portion of skin on the user’s forehead and/or temples.

[0074] Head-mounted inward-facing cameras are typically small, lightweight, and include a CMOS or a CCD sensor. The camera may capture images at various rates, and may have various resolutions (for example, 8x8 pixels, 32x32 pixels, 640x480 pixels, or more). In some embodiments, the camera may capture light in the near-infrared spectrum (NIR). Optionally, such a camera may include optics and sensors that capture light rays in at least one of the following NIR spectrum intervals: 700-800 nm, 700- 900 nm, and 700-1,050 nm. Optionally, the sensors may be CCD and/or CMOS sensors designed to be sensitive in the NIR spectrum.

[0075] In some embodiments, the system may include a light source configured to direct electromagnetic radiation at the second region. Optionally, the light source comprises one or more of the following: a laser diode (LD), a light-emitting diode (LED), and an organic light-emitting diode (OLED). It is to be noted that when embodiments described in this disclosure utilize light sources directed at a region of interest (ROI), such as an area appearing in images 485, the light source may be positioned in various locations relative to the ROI (perpendicular, not perpendicular, without occluding the ROI, etc.). In one example, the system includes four light sources, which may be monochromatic (such as 625 nm, 740 nm, 850 nm, and 940 nm), and the camera sensor does not include a near-infrared filter (at least until 945 nm). The camera captures images of lights emitted from the light sources and reflected from the second region of skin, and the computer calculates the glucose levels based on associations between combinations of the reflected lights and the user’s blood glucose levels. Optionally, the system further includes an outward-facing camera 498 having a color filter similar to the inward-facing camera 483, such that the images captured by the outward-facing camera 498 are utilized by the computer 490 to compensate for interferences from the environment which reduce the signal-to-noise ratio of the reflected lights captured in images 485.

[0076] The computer 490 is configured, in some embodiments, to identify, based on the PPG signal 481, times of systolic notches and times of systolic peaks. The computer 490 then calculates a blood glucose level 492 based on differences between a first subset of the images 485 taken during the times of systolic notches and a second subset of the images 485 taken during the times of systolic peaks.

[0077] In different embodiments, a reference to “the computer” may refer to different components and/or a combination of components. In some embodiments, the computer may include a processor located on a head-mounted device. In other embodiments, at least some of the calculations attributed to the computer may be performed on a remote processor. Thus, references to calculations being performed by the “computer” should be interpreted as calculations being performed utilizing one or more computers, with some of these one or more computers possibly being attached to a head-mounted device to which the PPG device and/or the camera are coupled. Examples of such computers are computer 400 or computer 410, illustrated in FIG. 14 and FIG. 15, respectively.

[0078] A systolic peak of a pulse wave is a fiducial point corresponding to a maximum value of a PPG signal of the pulse wave. Similarly, a systolic notch of the pulse wave is a fiducial point corresponding to a minimum value of the PPG signal of the pulse wave.
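
The identification of these fiducial points can be illustrated with a short sketch. The following is a minimal, illustrative example, not taken from the disclosure, that assumes the contact PPG signal is available as a uniformly sampled numpy array; the function name systolic_fiducials and the thresholds used are assumptions made for the example.

```python
# Minimal sketch (not from the disclosure): locating systolic peaks (pulse-wave
# maxima) and systolic notches (pulse-wave minima) in a contact PPG signal.
import numpy as np
from scipy.signal import find_peaks

def systolic_fiducials(ppg, fs, max_hr=180):
    """ppg: uniformly sampled 1-D array; fs: sampling rate in Hz."""
    min_distance = int(fs * 60.0 / max_hr)          # no more than max_hr beats per minute
    prominence = 0.5 * np.std(ppg)                  # illustrative threshold
    peaks, _ = find_peaks(ppg, distance=min_distance, prominence=prominence)
    notches, _ = find_peaks(-ppg, distance=min_distance, prominence=prominence)
    return peaks, notches

# Example with a synthetic pulse-like signal; the indices are converted to
# timestamps so they can later be matched against image-frame timestamps.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
peak_idx, notch_idx = systolic_fiducials(ppg, fs)
peak_times, notch_times = t[peak_idx], t[notch_idx]
```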

[0079] Herein, the alternative terms “blood glucose level”, “blood sugar level”, and “blood sugar concentration” may be used interchangeably and all refer to the concentration of glucose present in the blood, which may be measured in milligrams per deciliter (mg/dL).

[0080] Calculation of the blood glucose level 492 may involve the computer 490 utilizing a machine learning-based approach. In some embodiments, this may involve the computer 490 generating feature values based on data that includes the first and second subsets of the images 485 and/or the PPG signal 481. Optionally, the computer 490 utilizes a model 491, which was previously trained, to calculate, based on the feature values, the blood glucose level 492. Optionally, the computer 490 forwards a value indicative of the blood glucose level 492 to a device of the user and/or to another computer system.

[0081] Generally, machine learning-based approaches utilized by embodiments described herein involve training a model on samples, with each sample including: feature values generated based on certain PPG signals measured by the PPG device 480, certain images taken by the camera 483, and optionally other data, which were taken during a certain period, and a label indicative of the blood glucose level during the certain period, as determined by an external measurement device (e.g., from analysis of a blood sample). Optionally, a label indicative of the blood glucose level may be provided by the user, by a third party, and/or by a device used to measure the user’s blood glucose level, such as a finger-stick blood test, a test strip, a portable meter, and/or a continuous glucose monitor placed under the skin. Optionally, a label may be extracted based on analysis of electronic health records of the user, e.g., records generated while being monitored at a medical facility.

[0082] In some embodiments, the model 491 may be personalized for the user by training the model on samples that include: feature values generated based on measurements of the user, and corresponding labels indicative of the blood glucose level of the user while the measurements were taken (for example, using finger-stick blood samples, test strips, portable meters, and/or a continuous glucose monitor placed under the skin). In some embodiments, the model 491 may be generated based on measurements of multiple users, in which case, the model 491 may be considered a general model. Optionally, a model generated based on measurements of multiple users may be personalized for a certain user by being retrained on samples generated based on measurements of the certain user.

[0083] In order to achieve a robust model, the samples used for the training of the model 491 may include samples based on data collected for different conditions. Optionally, the samples are generated based on data (that includes trusted blood glucose level readings) collected on different days, while indoors and outdoors, and while different environmental conditions persisted.

[0084] In order to more accurately calculate blood glucose levels, training data utilized to generate the model 491 may include samples with labels in various ranges, corresponding to different blood glucose levels. This data includes other subsets of the images 485, which were taken prior to when the first and second subsets of the images 485 were taken (which are used to calculate the blood glucose level 492). In one example, training data used to generate the model 491 includes the following data: 3rd and 4th subsets of the images 485, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 70 and 100; 5th and 6th subsets of the images 485, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 100 and 125; 7th and 8th subsets of the images 485, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 120 and 150; and 9th and 10th subsets of the images 485, taken during the times of systolic notches and systolic peaks, respectively, while the user had blood glucose level that was between 150 and 180. The images may include one or more colors. In one example, the images include three colors. In another example, the images include three colors in the visible range and one color in the NIR range. In still another example, the images include at least two colors in the visible range and at least two colors in the NIR range.
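
For illustration only, a sketch of how such range-labeled training data might be assembled and used to fit a regression model is shown below; the feature layout, the choice of GradientBoostingRegressor, and the helper names are assumptions, not the Applicant's implementation of model 491.

```python
# Illustrative sketch: assembling range-labeled training samples (reference blood
# glucose in mg/dL) from image subsets taken at systolic notches and peaks, and
# fitting a regression model. Names and feature layout are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def features_from_subsets(notch_imgs, peak_imgs):
    """notch_imgs, peak_imgs: arrays of frames (N x H x W x C). The feature vector
    holds per-channel means at notches, at peaks, and their difference."""
    notch_mean = np.mean(notch_imgs, axis=(0, 1, 2))
    peak_mean = np.mean(peak_imgs, axis=(0, 1, 2))
    return np.concatenate([notch_mean, peak_mean, peak_mean - notch_mean])

def train_glucose_model(samples):
    """samples: list of (notch_imgs, peak_imgs, reference_glucose) tuples collected
    on different days and in different glucose ranges (e.g., 70-100, 100-125,
    120-150, 150-180 mg/dL)."""
    X = np.stack([features_from_subsets(n, p) for n, p, _ in samples])
    y = np.array([g for _, _, g in samples])
    return GradientBoostingRegressor().fit(X, y)

# Prediction for new subsets of images:
# glucose = model.predict(features_from_subsets(new_notch_imgs, new_peak_imgs).reshape(1, -1))[0]
```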

[0085] There are different ways in which the computer 490 may identify, based on the PPG signal 481, the times of systolic notches and the times of systolic peaks. In some embodiments, the identification of those times is done by providing the PPG signal 481, and/or feature values derived therefrom, as an input to a machine learning-based predictor that calculates the blood glucose level 492 (e.g., a neural network- based predictor). Thus, feature values generated based on images may be correlated with the intensity of the PPG signal. Therefore, in such cases, the “identification” of the times of the systolic peaks and the times of the systolic notches may be a step that is implicitly performed by the neural network, and it need not be an explicit, separate step that precedes the calculation of the blood glucose level, rather it is a process that is an integral part of that calculation.

[0086] In some embodiments, the computer 490 identifies, based on the PPG signal 481, times of systolic notches and times of systolic peaks. The PPG device 480 touches and occludes the first region, while the camera 483 is not in direct contact with the second region. Therefore, the PPG signal 481 usually has a much better signal-to-noise ratio (SNR) compared to iPPG signals extracted from the images 485, and the correlations between the PPG signal 481 and iPPG signals are used to improve the quality of the iPPG signals as explained above.

[0087] FIG. 5 illustrates a scenario in which certain images, from among the images 485, are selected based on times of systolic notches and systolic peaks identified in the PPG signal 481. In this example, the PPG device 480 is located in the nose piece of the glasses 482 and measures the first region. The camera 483 is located on the front-end of a temple of the glasses 482 and is oriented downward, such that it captures images of the second region, illustrated as region 484 on the user’s cheek. FIG. 5 shows an alignment between the PPG signal 481 and the images 485 (i.e., images taken appear above the value of the PPG signal measured when the images were taken). For each systolic peak and systolic notch in the PPG signal 481, a vertical line indicates one or more corresponding images from among the images 485. Images corresponding to systolic peaks are marked with a bold border, while images corresponding to systolic notches are marked with a dash border. In the figure, each systolic peak and notch has two corresponding images, but in various implementations, this number may vary (and need not be a fixed number). The computer 490 receives a first subset 485’ of images corresponding to the systolic notches and a second subset 485” of images corresponding to the systolic peaks, and calculates the blood glucose level 492 based on a difference between these two subsets of images, using one or more of the techniques described herein.
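
A minimal sketch of this frame-selection step, assuming each image frame has a timestamp on the same clock as the PPG signal, is shown below; the helper name frames_near and the choice of two frames per fiducial point are illustrative assumptions.

```python
# Sketch of the frame-selection step illustrated in FIG. 5: for each fiducial time
# from the contact PPG signal, pick the image frames whose timestamps are closest.
import numpy as np

def frames_near(times, frame_timestamps, frames, k=2):
    """Return, for each fiducial time, the k frames with the nearest timestamps."""
    selected = []
    for fiducial_time in times:
        order = np.argsort(np.abs(frame_timestamps - fiducial_time))[:k]
        selected.append(frames[np.sort(order)])
    return selected

# Example with placeholder data (frames would be the images 485):
frame_timestamps = np.arange(300) / 30.0           # e.g., 30 frames per second
frames = np.zeros((300, 32, 32, 3))                # T x H x W x C placeholder
notch_times = np.array([0.4, 1.2, 2.1])            # from the contact PPG, as above
peak_times = np.array([0.8, 1.6, 2.5])
first_subset = frames_near(notch_times, frame_timestamps, frames)    # cf. subset 485'
second_subset = frames_near(peak_times, frame_timestamps, frames)    # cf. subset 485''
```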

[0088] In order to calculate the blood glucose level 492, the computer 490 may evaluate various types of differences between the first subset of the images 485 taken during the times of systolic notches and the second subset of the images taken during the times of systolic peaks. It is noted that the differences are not limited to the first and second subsets of the images, and may include additional subsets as well.

[0089] In some embodiments, at least some of the feature values utilized by the computer 490 to calculate the blood glucose level 492 include first and second sets of feature values generated from the first and second subsets of the images 485, respectively. Optionally, the differences between the first and second subsets of the images 485 are determined from the differences between the first and second sets of feature values. For example, the first/second set of feature values may include feature values indicative of one or more of the following: amplitudes of iPPG signals extracted from images in the first/second subset of the images 485, slopes of iPPG signals extracted from images in the first/second subset of the images 485, a hemoglobin concentration pattern based on the first/second subset of the images 485, and a set of facial flushing patterns based on the first/second subset of the images 485.

[0090] In some embodiments, at least some of the feature values utilized by the computer 490 to calculate the blood glucose level 492 include a set of feature values generated by comparing the first and second subsets of the images 485 (and optionally other subsets as well), and calculating values representing differences between values extracted from the first and second subsets of the images 485 (and optionally the other subsets as well). For example, the set of feature values may include feature values indicative of one or more of the following: (i) a difference in amplitudes of iPPG signals extracted from images in the first subset of the images 485 and amplitudes of iPPG signals extracted from images in the second subset of the images 485; this difference may depend on the specific values of the different color channels in the images, (ii) a difference between a first hemoglobin concentration pattern based on the first subset of the images 485 and a second hemoglobin concentration pattern based on the second subset of the images 485; this difference may also depend on the specific values of the different color channels, and (iii) a difference between a first set of facial flushing patterns based on the first subset of the images 485 and a second set of facial flushing patterns based on the second subset of the images 485; this difference may also depend on the specific values of the different color channels.
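
The following sketch illustrates one possible way to turn such comparisons into feature values, using per-channel mean intensities over the region of interest as a stand-in for iPPG amplitudes; the specific statistics chosen are assumptions made for the example.

```python
# Illustrative difference-based feature values comparing the notch-time and
# peak-time subsets; per-channel ROI means stand in for iPPG amplitudes.
import numpy as np

def roi_means(images):
    """images: N x H x W x C array; returns per-frame, per-channel mean intensities (N x C)."""
    return images.reshape(images.shape[0], -1, images.shape[-1]).mean(axis=1)

def difference_features(notch_imgs, peak_imgs):
    notch_means = roi_means(notch_imgs)
    peak_means = roi_means(peak_imgs)
    amplitude_diff = peak_means.mean(axis=0) - notch_means.mean(axis=0)   # per channel
    spread_diff = peak_means.std(axis=0) - notch_means.std(axis=0)        # per channel
    return np.concatenate([amplitude_diff, spread_diff])
```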

[0091] The following are some examples of processing methods that may be applied to at least some of the images in order to calculate various values (e.g., iPPG signals, hemoglobin concentration patterns, and/or facial flushing patterns) that may be utilized by the computer to calculate the blood glucose level and other parameters. In some embodiments, one or more of the processing methods may be applied by the computer 490 before the various values are used to calculate the blood glucose level 492 (e.g., the preprocessing methods are applied to generate feature values that are fed as input to a neural network). In some embodiments, one or more of the processing methods may be applied by the computer 490 as part of the calculations used to calculate the blood glucose level 492 directly. For example, some layers and/or portions of a deep learning network used by the computer 490 may implement processing operations of the images (e.g., which are involved in calculating the hemoglobin concentration patterns), while other portions of the deep learning network are used to perform calculations on values representing the hemoglobin concentration patterns (in order to calculate the blood glucose level 492).

[0092] Various preprocessing approaches may be utilized in order to assist in calculating the various values described above, which are calculated from at least some of the images. Some non-limiting examples of the preprocessing approaches that may be used include: normalization of pixel intensities (e.g., to obtain a zero-mean unit variance time series signal), and conditioning a time series signal by constructing a square wave, a sine wave, or a user-defined shape. Images may undergo various preprocessing to improve the signal, such as color space transformation, blind source separation, and various filtering techniques.

[0093] Another approach that may be utilized in order to assist in calculating the various values described above, which are calculated from at least some of the images 485, involves Eulerian video magnification, as described in Wu, Hao-Yu, et al. "Eulerian video magnification for revealing subtle changes in the world." ACM Transactions on Graphics (TOG) 31.4 (2012): 1-8, and also in the hundreds of references citing this reference. The goal of Eulerian video magnification is to reveal temporal variations in videos that are difficult or impossible to see with the naked eye and display them in an indicative manner. This method takes a standard video sequence as input, and applies spatial decomposition, followed by temporal filtering to the frames. The resulting signal is then amplified to reveal hidden information. This method has been successfully applied in many applications in order to visualize the flow of blood as it fills the face and also to amplify and reveal small motions.
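
A highly simplified sketch in the spirit of this approach is shown below: each pixel's time series is band-pass filtered around the cardiac frequency band and the amplified result is added back. The spatial (pyramid) decomposition used in the published method is omitted for brevity, so this is only an approximation of Eulerian video magnification, with illustrative parameter values.

```python
# Simplified temporal magnification: band-pass filter each pixel's time series
# around the cardiac band and add the amplified result back to the frames.
# The spatial pyramid of the published method is omitted here.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify_color(frames, fs, low=0.7, high=3.0, alpha=50.0):
    """frames: T x H x W x C float array (T large enough for filtering); fs: frame rate in Hz."""
    b, a = butter(2, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, frames, axis=0)      # temporal band-pass per pixel
    return frames + alpha * filtered
```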

[0094] Yet another approach that may be utilized in order to assist in calculating the various values described above, which are calculated from at least some of the images 485, involves accentuating the color of facial flushing in the images. In one example, facial flushing values are calculated based on applying decorrelation stretching to the images (such as using a three color space), then applying K-means clustering (such as three clusters corresponding to the three color space), and optionally repeating the decorrelation stretching using a different color space. In another example, facial flushing values are calculated based on applying decorrelation stretching to the images (such as using a three color space), and then applying a linear contrast stretch to further expand the color range.
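
One possible realization of the first example, combining a basic decorrelation stretch with K-means clustering, is sketched below; the whitening-based stretch, the cluster count, and the function names are assumptions made for illustration.

```python
# One possible realization: decorrelation stretch of the color channels followed
# by K-means clustering into three clusters, as in the first example above.
import numpy as np
from sklearn.cluster import KMeans

def decorrelation_stretch(image):
    """image: H x W x 3 float array; decorrelates and rescales the color channels."""
    pixels = image.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Whiten along the principal color axes, then rescale to a common spread.
    transform = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + 1e-9)) @ eigvecs.T * pixels.std()
    return ((pixels - mean) @ transform + mean).reshape(image.shape)

def flushing_clusters(image, n_clusters=3):
    stretched = decorrelation_stretch(image)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(stretched.reshape(-1, 3))
    return labels.reshape(image.shape[:2])
```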

[0095] iPPG signals extracted from the images 485 can provide indications of the extent of blood flow at the second region. In some embodiments, the computer 490 extracts iPPG signals from the images 485, and calculates the blood glucose level 492 based on differences between values (such as amplitudes and/or slopes) of the iPPG signals during the times of systolic notches and values of the iPPG signals during the times of systolic peaks. It is noted that the term “based on” is an open statement that may include additional differences, such as, in this case, features relative to additional iPPG signals recognizable in images taken during times other than the systolic peaks and notches. Additionally or alternatively, the differences may be indicative of different associations between iPPG signals recognizable in the different color channels in the images, which are related to different blood glucose levels.

[0096] Identifying the systolic peaks and notches may be done using one or more of the techniques known in the art, and/or described herein, that may be used to identify landmarks in a cardiac waveform (e.g., systolic peaks, diastolic peaks), and/or extract various types of known values that may be derived from the cardiac waveform.

[0097] In some embodiments, the camera 483 is sensitive to at least three noncoinciding wavelength intervals, such that the images include at least three channels. Optionally, the computer 490 generates at least some of the feature values based on the images 485 by extracting separate iPPG signals from each of the at least three channels in the images 485. Optionally, the feature values described herein as being generated based on iPPG signals may include separate feature values generated from iPPG signals extracted from different channels. Optionally, the computer 490 utilizes correlations between the PPG signal 481 and the separate iPPG signals in order to calculate the blood glucose level 492.

[0098] Blood flow in the face can cause certain facial coloration due to concentration of hemoglobin in various vessels such as arterioles, capillaries, and venules. Coloration at a certain facial region, and/or changes thereto, can represent a hemoglobin concentration pattern at the certain region. This pattern can change because of various factors that can affect blood flow and/or vascular dilation, such as the external temperature, core body temperature, the emotional state, consumption of vascular dilating substances, and more. Hemoglobin concentration patterns may also provide a signal from which the computer 490 may calculate the blood glucose level 492. In one embodiment, the computer 490 calculates a first hemoglobin concentration pattern based on the first subset of the images 485, calculates a second hemoglobin concentration pattern based on the second subset of the images 485, and calculates the blood glucose level 492 based on differences between the first and second hemoglobin concentration patterns.

[0099] Optionally, values in a hemoglobin concentration pattern may be mapped to specific regions on the face, such that the hemoglobin concentration pattern may be considered a layer or grid that can be mapped onto the face in a predetermined manner. In some embodiments, a hemoglobin concentration pattern calculated from images refers to a color mapping of various portions of an area captured in the images. The color mapping may provide values that are average intensities of one or more colors of the pixels over a period of time during which the images were taken, may provide values that are maximum intensities of one or more colors of the pixels, and/or may be a function of one or more colors (channels) of the pixels. In yet other embodiments, a hemoglobin concentration pattern may refer to a contour map representing the intensities of the pixels at a certain wavelength.
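
As a concrete illustration of the grid-style representation, the sketch below computes a coarse grid of per-cell average intensities of one color channel over a window of frames; the grid size and channel choice are assumptions, not values from the disclosure.

```python
# Coarse grid of per-cell average intensities of one color channel over a window
# of frames, as one simple hemoglobin-concentration-pattern representation.
import numpy as np

def concentration_grid(frames, grid=(8, 8), channel=1):
    """frames: T x H x W x C array; returns a grid[0] x grid[1] array of mean intensities."""
    mean_img = frames[..., channel].mean(axis=0)                  # average over time
    rows = np.array_split(mean_img, grid[0], axis=0)
    return np.array([[cell.mean() for cell in np.array_split(row, grid[1], axis=1)]
                     for row in rows])
```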

[0100] A hemoglobin concentration pattern, such as one of the examples described above, may be calculated, in some embodiments, from images by a computer, such as computer 490. Optionally, the hemoglobin concentration pattern may be utilized to generate one or more feature values that are used in a machine learning-based approach by the computer 490.

[0101] Facial flushing patterns may also provide a signal from which the computer 490 may calculate the blood glucose level 492. In one embodiment, the computer 490 extracts a first set of facial flushing patterns based on the first subset of the images 485, extracts a second set of facial flushing patterns based on the second subset of the images 485, and calculates the blood glucose level 492 based on differences between the first and second facial flushing patterns.

[0102] Pulse transit times are another type of value that may provide a signal, from which the computer 490 may calculate the blood glucose level 492. In one embodiment, the first and second regions are fed by different arteries, which cause a time difference between the times of systolic peaks in the PPG signal 481 and times of systolic peaks in iPPG signals recognizable in the images 485. The computer 490 calculates the aforementioned time difference, and utilizes the time difference to calculate the blood glucose level 492. The associations between the amplitudes of the different color channels in the images, as a function of the pulse transit times, may be another type of input value for the calculation.
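
A simple way to estimate such a time difference, assuming the contact PPG signal and an iPPG signal have been resampled to a common rate, is cross-correlation; the sketch below is illustrative and not necessarily how the computer 490 performs this calculation.

```python
# Estimating the transit time as the lag that maximizes the cross-correlation
# between the contact PPG signal and an iPPG signal (both resampled to rate fs).
import numpy as np
from scipy.signal import correlate

def transit_time(ppg, ippg, fs):
    ppg = (ppg - ppg.mean()) / (ppg.std() + 1e-9)
    ippg = (ippg - ippg.mean()) / (ippg.std() + 1e-9)
    corr = correlate(ippg, ppg, mode="full")
    lag = np.argmax(corr) - (len(ppg) - 1)        # positive lag: iPPG peaks arrive later
    return lag / fs                               # time difference in seconds
```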

[0103] In some embodiments, at least some feature values may describe properties of the cardiac waveform in iPPG signals derived from subsets of the images. To this end, the computer may employ various approaches known in the art to identify landmarks in a cardiac waveform (e.g., systolic peaks, diastolic peaks), and/or extract various types of known values that may be derived from the cardiac waveform.

[0104] In some embodiments, one or more of the feature values utilized by the computer may be generated based on additional inputs from sources other than the PPG device and the camera, such as stress level, hydration level, skin temperature, environment temperature, and humidity.

[0105] Variations in the reflected ambient light may introduce artifacts into images collected with inward-facing cameras that can add noise to these images and make detections and/or calculations based on these images less accurate. In some embodiments, the system includes an outward-facing camera (such as 498), which is coupled to the smartglasses, and takes images of the environment. The computer may generate, based on the images of the environment, one or more feature values indicative of ambient illumination levels during the times at which the images 485 were taken with the camera 483, and utilize the one or more feature values indicative of the ambient illumination levels to improve the accuracy of the calculation of the blood glucose level 492, based on the images 485, the PPG signal 481, and optionally other data sources described herein. The outward-facing head-mounted camera 498 may be a thermal camera and/or a camera sensitive to wavelengths below 1050 nanometer.

[0106] In some embodiments, the computer 490 calculates the blood glucose level utilizing previously taken PPG signals and images of the user, and generates baseline values representing baseline properties of the user’s blood flow at a known blood glucose level, as explained above.

[0107] In one non-limiting example, feature values generated by the computer 490 include: pixel values from the images 485 and magnitude values of the PPG signal 481. In another non-limiting example, feature values generated by the computer 490 include intensities of fiducial points (systolic peaks and systolic notches) identified in iPPG signals extracted from the images 485, at times corresponding to appearances of those fiducial points, as detected in the PPG signal 481.

[0108] In a similar manner to the types of calculations related to the model 779, model 491 may involve a decision tree, a regression model, a neural network, and/or deep learning.

[0109] Detecting fever from images and temperatures.

[0110] It is often difficult to detect fever, to calculate core body temperature, and/or to detect alcohol intoxication from facial measurements. This is especially true in some real-life circumstances, such as when an air conditioner hose is directed towards the user’s face, while sitting close to a heater, while being outside in the sun, or while being in cold wind. It is also difficult to measure the temperature of the environment by a smartwatch or a non-wearable temperature sensor, because the hand with the smartwatch may be covered, and the non-wearable temperature sensor may not be exposed to radiation or wind hitting the face.

[0111] However, detecting fever and calculating core body temperature are very important. Fever is a common symptom of many medical conditions: infectious diseases, such as COVID-19, dengue, Ebola, gastroenteritis, influenza, Lyme disease, and malaria, as well as infections of the skin. It is important to track fever in order to be able to identify when a person might be sick and should be isolated (when an infectious disease is suspected).

[0112] This paragraph discloses claims that the Applicant may file in a divisional patent application. The order of the optional dependent claims below is not limiting, and the dependent claims may be arranged according to any order and multiple dependencies. 1. A system configured to detect fever, comprising: a first head-mounted temperature sensor configured to measure skin temperature (Tskin) at a first region on a user’s head; a second head-mounted temperature sensor configured to measure temperature of the environment (Tenv); and a computer configured to: receive images of a second region on the user’s face, captured by a camera sensitive to wavelengths below 1050 nanometer; calculate, based on the images, values indicative of hemoglobin concentrations at three or more regions on the user’s face; and detect whether the user has a fever based on Tskin, Tenv, and the values. 2. The system of claim 1, wherein the first region covers a portion of skin that is not located above a portion of a branch of the external carotid artery, and the computer is configured to perform the detection of whether the user has the fever by (i) generating feature values based on: Tskin, Tenv, and the values indicative of hemoglobin concentrations, and (ii) utilizing a machine learning-based model to calculate, based on the feature values, a value indicative of whether the user has the fever. 3. The system of claim 1, wherein the computer is further configured to utilize one or more calibration measurements of the user’s core body temperature, taken by a different device, to calibrate a model, and to utilize said calibrated model to calculate the user’s core body temperature based on Tskin, Tenv, and the values; and the computer is further configured to share fever history of the user upon receiving a permission from the user. 4. The system of claim 1, wherein the computer is further configured to calculate the values indicative of hemoglobin concentrations based on detecting imaging photoplethysmogram signals in the images; and the computer is further configured to utilize one or more calibration measurements of the user’s core body temperature, taken by a different device, prior to a certain time, and to calculate the user’s core body temperature based on Tskin, Tenv, and the values, which were taken after the certain time. 5. The system of any one of claims 1 to 4, wherein the first head-mounted temperature sensor is a contact temperature sensor that is not located above a portion of a branch of the external carotid artery, the second region is larger than 3 cm^2, and the camera is not head-mounted and is configured to capture the images from a distance longer than 5 cm from the user’s head. 6. The system of any one of claims 1 to 4, wherein the first head-mounted temperature sensor comprises an inward-facing thermal camera located more than 2 mm away from the user’s head, the second region is larger than 2 cm^2, and the camera is an inward-facing head-mounted camera configured to capture the images from a distance between 5 mm and 5 cm from the user’s head. 7. 
The system of any one of claims 1 to 4, further comprising a head-mounted acoustic sensor configured to take audio recordings of the user, and a head-mounted movement sensor configured to measure a signal indicative of movements of the user’s head (head-movement signal); wherein the computer is further configured to: (i) generate feature values based on Tskin, Tenv, the values indicative of hemoglobin concentrations, the audio recordings, and the head-movement signal, and (ii) utilize a machine learning-based model to calculate, based on the feature values, level of dehydration of the user; wherein the feature values based on the audio recordings are indicative of the user’s respiration rate, and the feature values based on the head-movement signal are indicative of the user’s physical activity level. 8. The system of any one of claims 1 to 4, wherein the images comprise a first channel corresponding to wavelengths that are mostly below 580 nanometers and a second channel corresponding to wavelengths mostly above 580 nanometers; the values indicative of hemoglobin concentrations comprise: (i) first values derived based on the first channel in the images, and (ii) second values derived based on the second channel in the images; whereby having separate values for different wavelengths makes it possible to account for interference from the environment when detecting whether the user has the fever. 9. The system of any one of claims 1 or 3 to 8, wherein the computer is further configured to detect whether the user is intoxicated by (i) generating feature values based on: Tskin, Tenv, and the values indicative of hemoglobin concentrations; and (ii) utilizing a machine learning-based model to calculate, based on the feature values, a value indicative of whether the user is intoxicated; wherein the machine learning-based model was trained based on previous data sets comprising: (i) Tskin, Tenv, and previous values indicative of hemoglobin concentrations captured by the camera while the user was sober, and (ii) Tskin, Tenv, and previous values indicative of hemoglobin concentrations captured by the camera while the user was intoxicated. 10. A method for detecting fever, comprising: receiving, from a first head-mounted temperature sensor, measurements of skin temperature (Tskin) at a first region on a user’s head; receiving, from a second head-mounted temperature sensor, measurements of temperature of the environment (Tenv); receiving, from a camera sensitive to wavelengths below 1050 nanometer, images of a second region on the user’s face; calculating, based on the images, values indicative of hemoglobin concentrations at three or more regions on the user’s face; and detecting whether the user has a fever based on Tskin, Tenv, and the values indicative of hemoglobin concentrations. 11. The method of claim 10, wherein the first region covers a portion of skin that is not located above a portion of a branch of the external carotid artery, and the detecting of whether the user has the fever comprises (i) generating feature values based on: Tskin, Tenv, and the values indicative of hemoglobin concentrations, and (ii) utilizing a machine learning-based model to calculate, based on the feature values, a value indicative of whether the user has the fever. 12. 
The method of claim 10, further comprising detecting whether the user is intoxicated by (i) generating feature values based on: Tskin, Tenv, and the values indicative of hemoglobin concentrations; and (ii) utilizing a machine learning-based model to calculate, based on the feature values, a value indicative of whether the user is intoxicated; wherein the machine learning-based model was trained based on previous data sets comprising: (i) Tskin, Tenv, and previous values indicative of hemoglobin concentrations captured by the camera while the user was sober, and (ii) Tskin, Tenv, and previous values indicative of hemoglobin concentrations captured by the camera while the user was intoxicated.

[0113] In some embodiments, images of a user’s face, as well as temperature measurements of the user’s face and the environment, are used for detecting fever, estimating core body temperature, detecting intoxication, and additional applications. The images may be captured using different hardware setups, such as one or more inward-facing head-mounted cameras and/or a non-head-mounted camera (such as a smartphone or a laptop camera).

[0114] FIG. 6 is a schematic illustration of components of additional embodiments of a system configured to detect fever and/or intoxication (e.g., due to alcohol consumption). The system includes at least a first head-mounted temperature sensor 372 that is configured to measure skin temperature (Tskin 373) at a first region on a user’s head, and a second head-mounted temperature sensor 374 that is configured to measure a temperature of the environment (Tenv 375). Optionally, these temperature sensors are physically coupled to a frame of smartglasses 370. The system also includes a computer 380, which may or may not be physically coupled to the frame of the smartglasses 370. The system may include additional elements such as a movement sensor (e.g., inertial measurement unit (IMU) 378), a camera 376 that is sensitive to wavelengths below 1050 nanometer, an anemometer 382, an acoustic sensor 383, a hygrometer 384, and/or a user interface 388.

[0115] In one embodiment, the computer 380 receives images 377 of a second region on the user’s face, captured by the camera 376, and calculates, based on the images 377, values indicative of hemoglobin concentrations at three or more regions on the user’s face. The computer 380 then detects whether the user has a fever based on Tskin 373, Tenv 375, and the values. Additionally or alternatively, the computer 380 may detect whether the user is sober, based on Tskin 373, Tenv 375, and the values.

[0116] Various configurations of sensors may be utilized in embodiments described herein to measure various regions on the user’s face. In one embodiment, the first region, at which Tskin 373 is measured, covers a portion of skin that is not located above a portion of a branch of the external carotid artery. In another embodiment, the second head-mounted temperature sensor 374 is an outward-facing thermal camera configured to measure levels of radiation directed at the user’s head.

[0117] In still another embodiment, the first head-mounted temperature sensor 372 is a contact temperature sensor (e.g., a thermistor), the second region is larger than 3 cm^2, and the camera 376 is not head-mounted. Optionally, in this embodiment, the camera 376 captures images 377 from a distance that is farther than 5 cm from the user’s head. Some examples of cameras that are not head-mounted that may be used in this embodiment include a smartphone camera, a laptop camera, a webcam, and a security camera.

[0118] In yet another embodiment, the first head-mounted temperature sensor 372 may include an inward-facing thermal camera located more than 2 mm away from the user’s head, the second region is larger than 2 cm^2, and the camera 376 is an inward-facing head-mounted camera that captures the images 377 from a distance between 5 mm and 5 cm from the user’s head.

[0119] In one example, the first head-mounted temperature sensor 372 is physically coupled to the frame of the smartglasses 370, and the first region is located on the user’s nose or temple. In another example, the first head-mounted temperature sensor 372 is physically coupled to a helmet or goggles, and the first region is located on the user’s forehead, cheekbone, and/or temple.

[0120] In some embodiments, in which the camera 376 is an inward-facing head-mounted camera, the camera 376 may have the same properties as other inward-facing head-mounted cameras described herein, such as the camera 483. The images 377 may have various properties that are attributed herein to images taken with visible-light cameras, such as the properties attributed to the images 485. Optionally, the images 377 may be processed in various ways described herein for image processing, such as described above regarding images 485. In some embodiments, especially when the camera 376 is not head-mounted, various processing procedures known in the art, such as face tracking, image recognition, and/or facial landmark discovery may be utilized to process the images 377.

[0121] The computer 380 is configured, in some embodiments, to detect a certain condition (e.g., whether the user has a fever or whether the user is intoxicated) based on Tskin 373, Tenv 375, and the values indicative of hemoglobin concentrations at three or more regions on the user’s face, which are calculated based on the images 377.

[0122] Calculating the values indicative of hemoglobin concentrations at three or more regions on the user’s face may be done utilizing the various teachings described above. In some examples, the centers of the three or more regions are at least 3 mm, 1 cm, or 3 cm apart from each other. Optionally, the values indicative of the hemoglobin concentrations at three or more regions on the user’s face may include values calculated based on different channels of the images 377. Optionally, the images 377 used to calculate the values are taken during a window in time, such as a window that is at most five seconds long, 30 seconds long, 5 minutes long, or one hour long.

[0123] As discussed above, the computer 380 may optionally calculate the values indicative of hemoglobin concentrations based on detecting facial flushing patterns in the images, and/or based on detecting iPPG signals in the images, and may utilize reflections of light at different wavelengths to account for thermal interference by the environment. For example, the images 377 include a first channel corresponding to wavelengths that are mostly below 580 nanometers and a second channel corresponding to wavelengths mostly above 580 nanometers, and the computer 380 calculates the values indicative of hemoglobin concentrations, which include the following: (i) first values derived based on the first channel in the images, and (ii) second values derived based on the second channel in the images. Having separate values for different wavelengths makes it possible to account for interference from the environment when detecting whether the user has the fever, because temperature interference from the environment is expected to affect the first values more than the second values. Optionally, the computer 380 also calculates a confidence in a detection of the fever, such that the confidence decreases as the difference between the first values and the second values increases.
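
The sketch below illustrates one possible form of this two-channel consistency check, in which a confidence score shrinks as the values derived from the two wavelength bands diverge; the specific mapping from disagreement to confidence is an assumption made for illustration.

```python
# Confidence that shrinks as the short-wavelength (<580 nm) and long-wavelength
# (>580 nm) hemoglobin-concentration values disagree; the mapping is an assumption.
import numpy as np

def detection_confidence(values_below_580, values_above_580, scale=1.0):
    disagreement = np.mean(np.abs(np.asarray(values_below_580) - np.asarray(values_above_580)))
    return 1.0 / (1.0 + scale * disagreement)     # in (0, 1]; lower when channels disagree
```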

[0124] Detection of whether the user has a fever and/or is intoxicated may involve utilization of machine learning approaches. In some embodiments, the computer 380 generates feature values based on Tskin 373, Tenv 375, and the values indicative of hemoglobin concentrations at the three or more regions on the user’s face (which are calculated based on the images 377). These feature values are utilized by the computer 380 in a machine learning-based approach to detect whether the user has a fever and/or whether the user is intoxicated. Optionally, the computer 380 utilizes a model 386 to calculate, based on the feature values, a value indicative of whether the user has a fever.
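
For illustration, a minimal sketch of this step is shown below, using a logistic-regression classifier as a stand-in for model 386; the feature layout and classifier choice are assumptions rather than the disclosed implementation.

```python
# Feature vector from Tskin, Tenv and per-region hemoglobin values, with a
# logistic-regression classifier standing in for model 386.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fever_features(t_skin, t_env, hemoglobin_values):
    return np.concatenate([[t_skin, t_env, t_skin - t_env], np.asarray(hemoglobin_values)])

# Training on previously collected labeled samples (1 = fever, 0 = no fever):
# X = np.stack([fever_features(ts, te, hv) for ts, te, hv in labeled_measurements])
# model_386 = LogisticRegression(max_iter=1000).fit(X, labels)
# p_fever = model_386.predict_proba(fever_features(t_skin, t_env, hv).reshape(1, -1))[0, 1]
```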

[0125] There are various ways in which a hemoglobin concentration pattern may be calculated. Optionally, calculating a hemoglobin concentration pattern involves processing the images, for example, in order to accentuate the color of one or more channels in the images, and/or accentuate the changes to colors of one or more channels in the images (e.g., accentuating color changes caused by blood flow from cardiac pulses). Additionally or alternatively, calculating a hemoglobin pattern may involve calculating a representation of the pattern by assigning values to regions in the images and/or to a representation of regions on the face. Optionally, the values may represent extents, changes to extents, and/or temporal changes to extents, of one or more color channels at the different regions.

[0126] The following are some examples of processing methods that may be applied to images in order to calculate a hemoglobin concentration pattern based on images. In some embodiments, one or more of the processing methods may be applied by the computer before hemoglobin concentration patterns are used for the detection, and/or as part of the detection. As part of preprocessing and/or calculating hemoglobin concentration patterns, the computer may use Eulerian video magnification and/or accentuate the color of facial flushing in the images, as described above.

[0127] In one embodiment, calculating a hemoglobin concentration pattern may involve assigning values to regions on the face and/or in the images that are binary values. FIG. 7A illustrates an example of a hemoglobin concentration pattern of a sober person where certain regions have a value “0” because the color of the red channel in the certain regions is below a certain threshold, and other regions have a value “1” because the color of the red channel in the other regions is above the threshold. FIG. 7B illustrates an example of the hemoglobin concentration pattern of the same person when intoxicated, and as a result the face is redder in certain locations.
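
A minimal sketch of computing such a binary pattern is shown below; the grid size and the red-channel threshold are illustrative assumptions, not values from the disclosure.

```python
# Binary pattern in the spirit of FIG. 7A and FIG. 7B: each cell of a coarse grid
# over the face gets "1" if its mean red-channel value exceeds a threshold.
import numpy as np

def binary_flushing_pattern(image, grid=(6, 6), threshold=0.55):
    """image: H x W x 3 array with values in [0, 1]; returns a grid of 0/1 values."""
    red = image[..., 0]
    rows = np.array_split(red, grid[0], axis=0)
    return np.array([[int(cell.mean() > threshold)
                      for cell in np.array_split(row, grid[1], axis=1)]
                     for row in rows])
```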

[0128] In another embodiment, calculating a hemoglobin concentration pattern may involve assigning values to regions on the face and/or in the images, which are continuous. Some examples of hemoglobin concentration patterns include a two-dimensional grid corresponding to average intensities and/or maximum intensities of colors from one or more channels, values obtained after extracting iPPG signals from the images, and/or averages of values of certain fiducial points extracted from the iPPG signals.

[0129] In yet another embodiment, calculating a hemoglobin concentration pattern may involve assigning values to regions on the face and/or in the images, which are a time series. For example, the pattern may include values in a two-dimensional grid, where each position in the grid is a time series that represents a shape of a PPG pulse wave at the location on the face corresponding to the position.

[0130] There are various types of feature values that may be generated by the computer 380 based on input data, which may be utilized to calculate a value indicative of whether the user has a fever and/or whether the user is intoxicated. Some examples of feature values include “raw” or minimally processed values based on the input data (i.e., the features are the data itself or applying generic preprocessing functions to the data). Other examples of feature values include feature values that are based on higher-level processing, such as feature values determined based on domain knowledge.

[0131] Generating a baseline pattern of hemoglobin concentration can be done based on images previously taken by the camera 376. Optionally, the baseline images were taken previously while the user was not in a state being detected.

[0132] In one embodiment, the computer 380 calculates, based on additional baseline images captured with the camera 376 while the user had a fever, a fever-baseline pattern comprising additional values indicative of hemoglobin concentrations at the three or more regions on the user’s face. The computer 380 then bases the detection of whether the user has the fever also on a deviation of the values indicative of hemoglobin concentration from the aforementioned fever-baseline pattern.

[0133] Similarly, the computer 380 calculates, based on additional baseline images captured with the camera 376 while the user was intoxicated, an intoxication-baseline pattern comprising additional values indicative of hemoglobin concentrations. The computer 380 then bases the detection of whether the user is intoxicated also on a deviation of the values indicative of hemoglobin concentration from the intoxication-baseline pattern.

[0134] Various types of feature values may be generated based on Tskin 373 and Tenv 375. These feature values may include a temperature value itself, a difference between the temperature and a previously taken temperature, and/or a difference between the temperature and a baseline temperature.

[0135] In one non-limiting example, feature values generated by the computer 380 include pixel values from the images 377 during a certain window, as well as a value of Tskin 373 and a value of Tenv 375 during the certain window. In another non-limiting example, feature values generated by the computer 380 include timings and intensities corresponding to fiducial points identified in iPPG signals extracted from the images 377, as well as a value of Tskin 373 and a value of Tenv 375. In yet another non-limiting example, feature values generated by the computer 380 include a value of Tskin 373, a value of Tenv 375, and the values indicative of hemoglobin concentrations at three or more regions on the user’s face generated based on the images 377.

[0136] Training the model 386 may involve utilization of various training algorithms known in the art (e.g., algorithms for training neural networks, and/or other approaches described herein). After the model 386 is trained, feature values may be generated for certain measurements of the user (e.g., Tskin 373, Tenv 375, images 377, and the other optional inputs), for which the value of the corresponding label (e.g., whether the user has a fever and/or whether the user is intoxicated) is unknown. The computer 380 can utilize the model 386 to calculate a value indicative of whether the user has a fever and/or whether the user is intoxicated based on these feature values.

[0137] Smartglasses for detecting congestive heart failure.

[0138] Heart failure, also known as congestive heart failure (CHF), occurs when the heart muscle does not pump blood as well as needed. Certain conditions, such as narrowed arteries in the heart (coronary artery disease) or high blood pressure, gradually leave the heart too weak or too stiff to fill and pump efficiently. CHF is a growing public health problem that affects nearly 6.5 million individuals in the US, and 26 million individuals worldwide. CHF is known to cause more hospitalizations in people over 65 than pneumonia and heart attacks.

[0139] Currently, drug therapy is the mainstay of treatment for CHF. However, without proper monitoring, treatment of CHF relies primarily on crisis intervention. With proper monitoring, more proactive and preventative disease management approaches may be attempted, which may reduce the number of hospitalizations and improve patient outcomes. However, monitoring CHF patients often relies on scheduled visits to a hospital following up on a cardiac event, home monitoring visits by nurses, and patients' self-monitoring performed at home. Such monitoring may often be expensive or inconvenient. Thus, there is a need for better CHF monitoring approaches that will enable convenient and more continuous monitoring of CHF patients in order to be able to deliver effective and timely interventions.

[0140] This paragraph discloses claims that the Applicant may file in a divisional patent application. The order of the optional dependent claims below is not limiting, and the dependent claims may be arranged according to any order and multiple dependencies. 1. A system configured to calculate extent of congestive heart failure (CHF), comprising: smartglasses configured to be worn on a user’s head; an inward-facing camera, physically coupled to the smartglasses, configured to capture images of an area comprising skin on the user’s head; wherein the area is larger than 4 cm^2, and the camera is mounted more than 5 mm away from the user’s head; a sensor, physically coupled to the smartglasses, configured to measure a signal indicative of a respiration rate of the user; and a computer configured to calculate the extent of CHF based on: a facial blood flow pattern recognizable in the images, and the respiration rate of the user recognizable in the signal. 2. The system of claim 1, wherein the images and the signal are measured over a period spanning multiple days, and the computer is further configured to identify an exacerbation of the CHF based on: a reduction in average facial blood flow recognizable in the images taken during the period, and an increase in an average respiration rate recognizable in the signal measured during the period. 3. The system of claim 1, further comprising: a head-mounted sensor configured to measure temperature of a region comprising skin on the user’s head (Tskin), and a head-mounted sensor configured to measure environmental temperature (Tenv); and the computer is further configured to utilize Tskin to compensate for effects of skin temperature on the facial blood flow pattern, and to utilize Tenv to compensate for effects of physiologic changes related to regulating the user’s body temperature on the facial blood flow pattern. 4. The system of any one of claims 1 to 3, wherein the images and the signal were measured while the user was at rest and prior to a period during which the user walked; wherein the computer is further configured to receive: (i) additional images, taken within ten minutes after the period with the camera, and (ii) an additional signal indicative of an additional respiration rate of the user, measured with the sensor within ten minutes after the period; and wherein the computer is further configured to calculate the extent of CHF based on: a difference between the facial blood flow pattern recognizable in the images and an additional facial blood flow pattern recognizable in the additional images, and a difference between the respiration rate recognizable in the signal and the additional respiration rate recognizable in the additional signal. 5. 
The system of claim 4, further comprising a movement sensor, physically coupled to the smartglasses, configured to measure movements of the user; wherein the computer is further configured to: calculate a number of steps performed by the user during the period, and to calculate the extent of CHF responsive to the number of steps exceeding a predetermined threshold greater than twenty steps. 6. The system of claim 4, wherein the computer is further configured to: calculate a value indicative of skin color at different times based on the additional images, and to calculate the extent of CHF based on a length of a duration following the period, in which the difference between the skin color and a baseline skin color, calculated based on the images, was above a threshold. 7. The system of claim 4, wherein the computer is further configured to: calculate a value indicative of skin color at different times based on the additional images, and to calculate the extent of CHF based on a rate of return of the user’s skin color to a baseline skin color calculated based on the images. 8. The system of claim 4, wherein the computer is further configured to: calculate respiration rates of the user at different times based on the additional signal, and to calculate the extent of CHF based on a length of a duration following the period, in which the difference between the respiration rate of the user and a baseline respiration rate, calculated based on the respiration rates, was above a threshold. 9. The system of any one of claims 1 to 3, wherein the computer is further configured to receive: (i) previous images of the area, which are indicative of a previous facial blood flow pattern while the user had a certain extent of CHF; and (ii) a previous respiration rate taken while the user had the certain extent of CHF; and wherein the computer is further configured to calculate the extent of CHF based on: a difference above a first predetermined threshold between the facial blood flow pattern and the previous facial blood flow pattern, and an increase above a second predetermined threshold in the respiration rate compared to the previous respiration rate. 10. The system of any one of claims 1 to 3, or 9, wherein the computer is further configured to calculate, based on the images, a value indicative of extent of color changes to skin in the area due to cardiac pulses, and to utilize a difference between the value and a baseline value for the extent of the color changes to calculate the extent of CHF; wherein the baseline value for the extent of the color changes was determined while the user experienced a certain baseline extent of CHF. [0141] Congestive heart failure (CHF) results in an inability of the heart to pump blood efficiently to meet the physiological demands of day-to-day activities. This causes manifestation of various signs related to the inability to adequately perform physical activities. These signs may include: shortness of breath and an increase in the respiration rate, an increased and/or irregular heart rate, a decrease in blood flow throughout the body which affects the patient’s facial blood flow pattern, and a decrease in physical activity. Detection of these signs, the severity of their manifestation, and/or the change to their severity compared to a baseline, can be used to calculate an extent to which a person is experiencing CHF.

[0142] Some embodiments of systems for calculating an extent of congestive heart failure (CHF) that a user is suffering from, and/or identifying exacerbation of CHF, are described below. These systems rely on measuring signs of CHF that include changes to facial blood flow, respiration, and/or level of activity. Embodiments of these systems include: smartglasses with various sensors and/or cameras coupled thereto, and a computer that is used to calculate the extent of CHF and/or to detect the exacerbation of CHF.

[0143] FIG. 8 is a schematic illustration of components of a system configured to calculate an extent of CHF and/or identify exacerbation of CHF. In one embodiment, the system includes at least a pair of smartglasses (e.g., smartglasses 800 in FIG. 9) and an inward-facing camera 820 (such as a camera from among cameras 802a, 802b, and 802c illustrated in FIG. 9). The system also includes a computer 828 and a sensor 822 that measures a signal indicative of respiration, and may further include an optional movement sensor (inertial measurement unit, IMU 830), an optional skin temperature sensor 824, an optional environment temperature sensor 826, and/or an optional user interface 832.

[0144] The inward-facing camera 820 captures images 821 of an area comprising skin on the user’s head. Optionally, the area comprises a portion of the lips of the user. Additional examples of the area are illustrated in FIG. 9, which describes various possibilities for positioning one or more inward-facing cameras on smartglasses 800. Inward-facing camera 802a is located above the nose bridge and captures images of an area 803a on the user’s forehead. Inward-facing cameras 802b and 802c are located on the left and right sides of the smartglasses 800, respectively; they capture images that include areas 803b and 803c on the left and right sides of the user’s face, respectively.

[0145] Changes in respiration pattern (in particular an increased respiration rate) are often associated with CHF and its exacerbation. In order to incorporate information about respiration into detection of CHF, the sensor 822 measures respiration signal 823 that is indicative of the user’s respiration rate. Various types of sensors may be utilized for this purpose. One implementation of sensor 822 includes one or more microphones coupled to the smartglasses, and the respiration signal 823 is derived from the sounds recorded by the microphones. Various algorithmic approaches may be utilized to extract parameters related to respiration from an acoustic signal; some examples are provided in (i) Pramono, Renard Xaviero Adhi, Stuart Bowyer, and Esther Rodriguez-Villegas. "Automatic adventitious respiratory sound analysis: A systematic review." PloS one 12.5 (2017): e0177926; (ii) US patent No. 7850619; and (iii) US patent application No. 2019/0029563. In another embodiment, the sensor 822 may be an inward-facing head-mounted thermal camera, and the respiration signal 823 is derived from thermal measurements of a region below the nostrils, as described in US patents No. 10,130,308 and 10,136,856.
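By way of a non-limiting illustration, the following minimal Python sketch shows one conceivable way to estimate a respiration rate from an acoustic signal, by smoothing the amplitude envelope of the audio and counting breath peaks. It assumes the numpy and scipy libraries; the smoothing window, peak spacing, and synthetic test signal are illustrative assumptions and do not reproduce the specific algorithms of the references cited above.

import numpy as np
from scipy.signal import find_peaks, hilbert

def estimate_respiration_rate(audio, fs):
    # Amplitude envelope of the breathing-related sounds.
    envelope = np.abs(hilbert(audio))
    # Smooth the envelope with a ~1 s moving average, keeping only slow (breathing) variations.
    win = int(fs * 1.0)
    slow = np.convolve(envelope, np.ones(win) / win, mode="same")
    # Treat each envelope peak as one breath; require at least ~1.5 s between breaths.
    peaks, _ = find_peaks(slow, distance=int(1.5 * fs))
    duration_min = len(audio) / fs / 60.0
    return len(peaks) / duration_min  # breaths per minute

# Illustrative usage with synthetic data: 30 s of noise modulated at 0.25 Hz (~15 breaths/min).
fs = 8000
t = np.arange(0, 30, 1 / fs)
audio = (0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t)) * np.random.randn(len(t))
print(round(estimate_respiration_rate(audio, fs)))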

[0146] The computer 828 is configured to calculate the extent of CHF based on facial blood flow patterns recognizable in the images 821 taken by the inward-facing camera 820, and based on the respiration rate recognizable in the respiration signal 823.

[0147] Herein, sentences of the form “a facial blood flow pattern recognizable in the images (of an area comprising skin on the user’s head)” refer to effects of blood volume changes due to pulse waves that may be extracted from one or more images of the area. These changes are identified and utilized by the computer, as explained above in relation to computers 780, 490 and 380. Additionally, sentences of the form “a respiration rate recognizable in the respiration signal” refer to values that may be extracted from the respiration signal, when algorithms known in the art are utilized to calculate the respiration rate from the respiration signal. It is to be noted that stating that a computer performs a calculation based on a certain value that is recognizable in certain data does not necessarily imply that the computer explicitly extracts the value from the data. For example, the computer may perform its calculation without explicitly calculating the respiration rate. Rather, data that reflects the respiration rate (the respiration signal) may be provided as input utilized by a machine learning algorithm. Many machine learning algorithms (e.g., neural networks) can utilize such an input without the need to explicitly calculate the value that is “recognizable”.

[0148] The computer 828 may produce different types of results, in embodiments described herein, based on input data that includes the images 821 and optionally other data, such as the respiration signal 823, Tskin 825, Tenv 827, measurements of the IMU 830, and/or additional optional data described herein. Optionally, the results, which may include, for example, an indication of the extent of CHF and/or an indication of an exacerbation, are reported via the user interface 832. Optionally, the results may be reported to a caregiver (e.g., a physician).

[0149] Calculating the extent of CHF may involve utilization of previous measurements for which a corresponding baseline extent of CHF the user experienced is known. In one embodiment, the computer 828 calculates, based on the images 821, a value indicative of an extent to which skin in the area is blue and/or gray. The computer 828 then utilizes a difference between the value and a baseline value for the extent to which skin in the area is blue and/or gray to calculate the extent of CHF. Optionally, the baseline value was determined while the user experienced a certain baseline extent of CHF.
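As a purely illustrative, non-limiting sketch of how a single value indicative of the extent to which skin appears blue and/or gray might be derived from image pixels and compared against a baseline, the following Python code assumes numpy, RGB pixel arrays sampled from the area, and an arbitrary threshold; it is not a definitive implementation of this embodiment.

import numpy as np

def blue_gray_score(skin_pixels):
    # skin_pixels: (N, 3) array of RGB values in [0, 255] sampled from the area.
    # The score grows when the skin appears bluer and/or grayer (low channel spread).
    r, g, b = skin_pixels[:, 0], skin_pixels[:, 1], skin_pixels[:, 2]
    blueness = np.mean(b - r)
    grayness = -np.mean(np.std(skin_pixels, axis=1))
    return blueness + grayness

def compare_to_baseline(current_pixels, baseline_pixels, threshold=5.0):
    # Difference relative to a baseline recorded at a known (baseline) extent of CHF.
    diff = blue_gray_score(current_pixels) - blue_gray_score(baseline_pixels)
    return diff, diff > threshold

# Illustrative usage with random stand-in pixel data.
current = np.random.randint(0, 256, (1000, 3)).astype(float)
baseline = np.random.randint(0, 256, (1000, 3)).astype(float)
print(compare_to_baseline(current, baseline))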

[0150] In another embodiment, the computer 828 calculates, based on the images 821, a value indicative of an extent of color changes to skin in the area due to cardiac pulses. The computer 828 then utilizes a difference between the value and a baseline value for the extent of the color changes to calculate the extent of CHF. Optionally, the baseline value for the extent of the color changes was determined while the user experienced a certain baseline extent of CHF.
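The following minimal sketch, given only as a non-limiting illustration, shows one way the extent of pulse-related color changes could be quantified: the mean green-channel value per frame is band-pass filtered around typical heart-rate frequencies, and its peak-to-peak amplitude is taken as the value to compare against a baseline. It assumes numpy and scipy, and the frame array, frame rate, and band limits are illustrative assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

def pulse_color_change_amplitude(frames, fps):
    # frames: (T, H, W, 3) array of RGB frames of the skin area, captured at fps frames/s.
    green = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)  # mean green per frame
    green = green - green.mean()
    # Band-pass 0.7-3 Hz, i.e. roughly 42-180 beats per minute.
    b, a = butter(2, [0.7 / (fps / 2), 3.0 / (fps / 2)], btype="band")
    pulse = filtfilt(b, a, green)
    return pulse.max() - pulse.min()

# Illustrative usage with stand-in frames (10 s of video at 30 fps). The difference between
# this value and a baseline value, measured at a known baseline extent of CHF, could then
# feed the calculation of the extent of CHF.
frames = np.random.rand(300, 8, 8, 3)
print(pulse_color_change_amplitude(frames, fps=30))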

[0151] In still another embodiment, the computer 828 identifies an exacerbation of the CHF by comparing current measurements of the user, with previous measurements of the user. To this end, the computer 828 receives: (i) previous images of the area, which are indicative of a previous facial blood flow pattern while the user had a certain extent of CHF, and (ii) a previous respiration rate taken while the user had the certain extent of CHF. Optionally, the certain extent of CHF is provided from manual examination of the user, e.g., by a physician who examined the user at the time. Additionally or alternatively, the certain extent of CHF was calculated by the computer 828 based on data that includes the previous images and the previous respiration signal.

[0152] In one embodiment, the computer 828 may identify exacerbation of the CHF utilizing a machine learning based approach (as discussed above), in which the feature values used to detect an exacerbation of CHF include a feature value indicative of a difference between the facial blood flow pattern and the previous facial blood flow pattern, and another feature value indicative of an extent of increase in the respiration rate compared to the previous respiration rate. Optionally, a label may be provided manually by the user and/or by a medical professional who examined the user, and/or may be extracted from electronic health records of the user.

[0153] Identifying an exacerbation of CHF may involve, in some embodiments, monitoring the user over a period of multiple days, weeks, and even months or more. Thus, the images 821 and the respiration signal 823 may be measured over a period spanning multiple days. In these embodiments, the computer 828 may identify the exacerbation of the CHF based on a reduction in average facial blood flow recognizable in the images 821 taken during the period and an increase in the average respiration rate recognizable in measurements of the respiration signal 823 measured during the period.

[0154] In one non-limiting example, feature values generated by the computer 828 include pixel values from the images 821, and values obtained by applying an FFT to the respiration signal 823 (which includes audio) and binning the result according to mel filterbank energies, as is done in an MFCC transform.
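As a non-limiting illustration of this example, the following Python sketch builds such a feature vector from flattened pixel values and mel-frequency cepstral coefficients of the audio; it assumes the numpy library and the third-party librosa library for MFCC extraction, and the array shapes are hypothetical.

import numpy as np
import librosa  # assumed third-party library, used here only for MFCC extraction

def build_chf_feature_vector(images, respiration_audio, sr):
    # images: (T, H, W, 3) array of frames; respiration_audio: 1-D float array sampled at sr Hz.
    pixel_features = images.astype(np.float32).ravel() / 255.0
    # MFCCs: FFT -> mel filterbank energy binning -> log -> DCT, averaged over time.
    mfcc = librosa.feature.mfcc(y=respiration_audio, sr=sr, n_mfcc=13)
    audio_features = mfcc.mean(axis=1)
    return np.concatenate([pixel_features, audio_features])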

[0155] In another non-limiting example, feature values generated by the computer 828 include timings and intensities corresponding to fiducial points identified in iPPG signals extracted from the images 821, and values of respiration parameters calculated based on the respiration signal 823.

[0156] Training the model utilized to calculate the extent of CHF and/or identify an exacerbation of CHF may involve utilization of various training algorithms known in the art (e.g., algorithms for training neural networks, and/or other approaches described herein). After the model is trained, feature values may be generated for certain measurements of the user (e.g., the images 821, the respiration signal 823, etc.), for which the value of the corresponding label (e.g., the extent of CHF and/or an indicator of whether there is an exacerbation) is unknown. The computer 828 can utilize the model to calculate the extent of CHF, and/or whether there is an exacerbation of CHF, based on these feature values.
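A minimal, non-limiting sketch of such a training and inference flow is given below, using a gradient boosting regressor from the scikit-learn library as one of many possible model types; the feature matrix, the 0-4 label scale, and the random stand-in data are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# X: one row of feature values per labeled measurement session (e.g., built as sketched above).
# y: the corresponding labels, e.g. an extent of CHF assigned by a physician (0 = none, 4 = severe).
X = rng.normal(size=(200, 40))
y = rng.uniform(0, 4, size=200)

model = GradientBoostingRegressor().fit(X, y)

# After training, feature values are generated for new measurements whose label is unknown,
# and the model estimates the extent of CHF from them.
new_features = rng.normal(size=(1, 40))
print(model.predict(new_features)[0])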

[0157] In a similar manner to the calculations described above in relation to the model 779, the model used here may involve a decision tree, a regression model, a neural network, and/or deep learning.

[0158] Detecting respiratory tract infection based on changes in coughing sounds.

[0159] Coughing often reflects respiratory irritation or illness and can also occur as a symptom of a respiratory tract infection (RTI) and of various diseases including asthma, cystic fibrosis, chronic obstructive pulmonary disease (COPD), flu, or COVID-19. Typically, the more severe an RTI, the more intense the coughing becomes. Thus, analysis of the sounds of coughing can help in detecting changes in the severity of a disease such as the flu or COVID-19.

[0160] However, there are many challenges when it comes to monitoring and analyzing coughing in day-to-day settings. In particular, it can be difficult to obtain a consistent auditory signal that enables accurate analysis of the sound in order to detect changes in intensity and spectral properties. One difficulty that needs to be overcome stems from the fact that microphones used to record the sound do not remain at the same distance and/or orientation with respect to the sound source (the mouth). Thus, it can be difficult to ascertain an extent to which a person’s coughing intensifies and/or how spectral properties of the coughing sounds change. Another difficulty in analysis and detection of coughs stems from background noises in the person’s vicinity. Background noises can often interfere with the audio signal, making it difficult to separate the person’s coughing from other, non-coughing sounds. At times, background noise may also include other people in the vicinity who may be coughing, which can confound algorithms used to analyze the audio.

[0161] Thus, there is a need for an ambulatory system to monitor coughing, which can capture audio in real-world settings, in order to accurately assess extents of coughing to help determine the severity of an RTI.

[0162] This paragraph discloses claims that the Applicant may file in a divisional patent application. The order of the optional dependent claims below is not limiting, and the dependent claims may be arranged according to any order and multiple dependencies.
1. A system configured to detect a respiratory tract infection (RTI) based on changes in coughing sounds, comprising: a wearable ambulatory system comprising the following sensors: (i) a head-mounted acoustic sensor configured to be mounted at a fixed position relative to a user’s head, and to take audio recordings comprising coughing sounds of the user; and (ii) a head-mounted movement sensor configured to measure a signal indicative of movements of the user’s head (movement signal); and a computer configured to: (i) receive current measurements of the user, taken with the sensors while the movement signal was indicative of head movements that characterize coughing; (ii) receive earlier measurements of the user, taken with the sensors at least four hours before the current measurements, while the user had a known extent of the RTI and while the movement signal was indicative of head movements that characterize coughing; and (iii) detect a change relative to the known extent of the RTI based on a difference between the current measurements and the earlier measurements.
2. The system of claim 1, further comprising a head-mounted temperature sensor configured to measure temperature of the environment (Tenv); wherein the computer is further configured to select the earlier measurements such that a difference between a Tenv, measured while the earlier measurements were taken, and a Tenv, measured while the current measurements were taken, is below a predetermined threshold smaller than 7°C.
3. The system of claim 1, wherein the movement signal is further indicative of movements of the user’s body; wherein the computer is further configured to: calculate, based on the movement signal in the current measurements, a current level of physical activity that belongs to a set comprising: being stationary, and walking; and select the earlier measurements such that an earlier level of physical activity, calculated based on the movement signal belonging to the earlier measurements, is the same as the current level of physical activity.
4. The system of any one of claims 1 to 3, wherein the computer is further configured to generate feature values based on data comprising the current measurements and the earlier measurements, and to utilize a machine learning-trained model to calculate, based on the feature values, a value indicative of the change relative to the known extent of the RTI; and wherein the machine learning-trained model was trained on training data of multiple users, with training data of each certain user, from among the multiple users, comprising certain first and second measurements taken with the sensors while the certain user had certain first and second known extents of the RTI, respectively.
5. The system of claim 4, further comprising a head-mounted temperature sensor configured to measure temperature of a region of skin on the user’s head (Tskin); wherein the computer is further configured to generate one or more of the feature values based on first and second values of Tskin measured while the earlier measurements and the current measurements were taken, respectively, and to utilize the one or more of the feature values to calculate the value indicative of the change relative to the known extent of the RTI; whereby the one or more of the feature values are indicative of a change to Tskin between when the earlier measurements and the current measurements were taken; and further comprising a second head-mounted acoustic sensor configured to be mounted at a second fixed position relative to the user’s head, and to take second audio recordings; wherein the distance between the head-mounted acoustic sensor and the second head-mounted acoustic sensor is greater than 1 cm; wherein the computer is configured to apply a beamforming technique to the audio recordings and the second audio recordings, to enhance a signal in which the coughing sounds of the user are recognizable.
6. The system of claim 4, further comprising a head-mounted photoplethysmograph device (PPG device) configured to measure a signal indicative of an oxygen saturation level of the user’s blood (SpO2); wherein the computer is further configured to generate one or more of the feature values based on first and second values of SpO2 measured while the earlier measurements and the current measurements were taken, respectively, and to utilize the one or more feature values to calculate the value indicative of the change relative to the known extent of the RTI; whereby the one or more feature values are indicative of a change to SpO2 between when the earlier measurements and the current measurements were taken.
7. The system of claim 4, further comprising a head-mounted temperature sensor configured to measure temperature of the environment (Tenv); wherein the computer is further configured to generate one or more of the feature values based on first and second values of Tenv measured while the earlier measurements and the current measurements were taken, respectively, and to utilize the one or more of the feature values to calculate the value indicative of the change relative to the known extent of the RTI; whereby the one or more of the feature values are indicative of a change to Tenv between when the earlier measurements and the current measurements were taken.
8. The system of claim 4, further comprising a sensor configured to measure a signal indicative of a heart rate of the user (Sigm); wherein the computer is further configured to generate one or more of the feature values based on first and second values of Sigm measured while the earlier measurements and the current measurements were taken, respectively, and to utilize the one or more of the feature values to calculate the value indicative of the change relative to the known extent of the RTI; whereby the one or more of the feature values are indicative of a change to the heart rate of the user between when the earlier measurements and the current measurements were taken.
9. The system of claim 4, wherein the computer is further configured to utilize the movement signal to select first and second portions of the audio recordings belonging to the earlier measurements and the current measurements, respectively; wherein the first and second portions are selected to correspond to periods in which the movement signal did not indicate head movements that are characteristic of coughing; and wherein the computer is further configured to: calculate, based on the first and second portions, first and second respective respiration rates of the user; generate, based on the first and second respiration rates, one or more of the feature values; and utilize the one or more of the feature values to calculate the value indicative of the change relative to the known extent of the RTI; whereby the one or more of the feature values are indicative of a change to the user’s respiration rate between when the earlier measurements and the current measurements were taken.
10. The system of claim 4, wherein the movement signal is further indicative of an orientation of the user’s head relative to the direction in which earth’s gravity acts; and wherein the computer is further configured to: identify first and second portions of the audio recordings in the earlier measurements and the current measurements, respectively, which were recorded when the user’s head was upright; calculate first and second extents of coughing based on the first and second portions, respectively; generate one or more of the feature values based on the first and second extents of coughing; and utilize the one or more feature values to calculate the value indicative of the change relative to the known extent of the RTI; whereby the one or more feature values are indicative of a change in extents of coughing while the head is upright, between when the earlier measurements were taken and when the current measurements were taken.
11. The system of claim 4, wherein the movement signal is further indicative of movements of the user’s body; and wherein the computer is further configured to: identify first and second portions of the audio recordings in the earlier measurements and the current measurements, respectively, which were recorded after the user was stationary for at least a certain period, which is greater than one minute; calculate first and second extents of coughing based on the first and second portions; generate one or more of the feature values based on the first and second extents of coughing; and utilize the one or more feature values to calculate the value indicative of the change relative to the known extent of the RTI; whereby the one or more feature values are indicative of a change in extents of coughing while being stationary, between when the earlier measurements were taken and when the current measurements were taken.

[0163] In one embodiment, changes to the extent of a respiratory tract infection (RTI) are detected based on monitoring coughing, and the systems may provide an early warning of infection with a disease like COVID-19, as well as indications of its progression and severity. The system includes a wearable ambulatory system, such as smartglasses or a smart-helmet, with at least the following sensors: (i) a head-mounted acoustic sensor configured to be mounted at a fixed position relative to a user’s head, and to take audio recordings comprising coughing sounds of the user, and (ii) a head-mounted movement sensor configured to measure a signal indicative of movements of the user’s head (“movement signal”). Such hardware enables continuous and comfortable monitoring of a user in order to identify to what extent the user coughs, and to detect changes in coughing which may be indicative of changes in the extent of an RTI. The detection may also be based on additional signals, such as the respiration rate and skin temperature, which can also be monitored using head-mounted sensors.

[0164] Analysis of sensor data in order to detect coughing, as well as to detect changes relative to the known extent of the RTI, is performed using a computer. In some embodiments, the computer receives current measurements of the user, taken with the sensors while the movement signal was indicative of head movements that characterize coughing. Additionally, the computer receives earlier measurements of the user, taken with the sensors at least four hours before the current measurements, while the user had a known extent of the RTI and while the movement signal was indicative of head movements that characterize coughing. The computer calculates a change with respect to the extent of the RTI based on a difference between the current measurements and the earlier measurements.

[0165] Since coughing typically involves making both characteristic sounds and characteristic movements (e.g., head jerking), using measurements from different modalities (acoustic and movement) can help improve the identification of the user’s coughs and avoid misidentification due to noise in the user’s vicinity.
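The following non-limiting Python sketch illustrates one conceivable way to combine the two modalities: a frame is counted as a cough only when a burst of audio energy coincides with a head-movement jerk, which helps reject coughing sounds made by other people nearby. It assumes numpy; the per-frame signals, thresholds, and tolerance are hypothetical.

import numpy as np

def detect_coughs(audio_energy, head_jerk, energy_thr, jerk_thr, max_lag=3):
    # audio_energy and head_jerk are 1-D arrays sampled on a common frame time base.
    audio_events = np.flatnonzero(audio_energy > energy_thr)
    jerk_events = np.flatnonzero(head_jerk > jerk_thr)
    # Keep only audio bursts that coincide (within max_lag frames) with a head jerk.
    return np.array([t for t in audio_events
                     if np.any(np.abs(jerk_events - t) <= max_lag)])

# Illustrative usage with stand-in per-frame values containing two simulated coughs.
rng = np.random.default_rng(1)
energy, jerk = rng.random(100), rng.random(100)
energy[[20, 60]] = 2.0
jerk[[21, 59]] = 2.0
print(detect_coughs(energy, jerk, energy_thr=1.5, jerk_thr=1.5))  # -> [20 60]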

[0166] In some embodiments, coughing is identified using multiple acoustic sensors that are coupled to the wearable ambulatory system and mounted at fixed positions relative to the head of the user wearing the ambulatory wearable device. This may confer several advantages when it comes to obtaining consistent audio recordings of the user. In particular, having the acoustic sensors mounted at fixed positions makes it possible to know the exact distances of the sound sources (e.g., the mouth and nostrils) from the microphones, and thereby enhance the audio analysis using known beamforming techniques (a minimal illustrative sketch of such beamforming is provided after the description of FIG. 10 below).

[0167] FIG. 10 is a schematic illustration of components of a system that utilizes smartglasses or a smart-helmet to monitor a user’s respiration and/or coughing. The system includes one or more acoustic sensors 202, mounted at fixed positions relative to the head of the user wearing the ambulatory wearable device, and a movement sensor 206. The movement sensor 206, which may be an IMU, measures a signal indicative of movements of the head of the user 260, and/or an orientation of the head of the user 260 with respect to the earth’s gravity. The ambulatory wearable device may also include additional sensors, such as a skin temperature sensor 208, an environment temperature sensor 210, a PPG device 212, and an inward-facing camera 218. The system also includes computer 200, which may perform at least some of the calculations involved in analysis of measurements taken with the various sensors described above, as well as provide results of these calculations and/or interact with a user via user interface 220.
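The sketch below is a minimal, non-limiting illustration of delay-and-sum beamforming that exploits the fixed, known microphone-to-mouth geometry mentioned in paragraph [0166]; it assumes numpy, and the microphone coordinates, sampling rate, and speed of sound are illustrative values.

import numpy as np

def delay_and_sum(recordings, mic_positions, source_position, fs, c=343.0):
    # recordings: (M, N) array, one row per microphone, sampled at fs Hz.
    # mic_positions and source_position are 3-D coordinates (meters) that are fixed
    # relative to the head because the microphones are mounted on the frame.
    distances = np.linalg.norm(mic_positions - source_position, axis=1)
    # Delay of each microphone, in samples, relative to the closest microphone.
    delays = np.round((distances - distances.min()) / c * fs).astype(int)
    # Advance each channel by its delay so the wearer's sound adds up coherently.
    aligned = [np.roll(channel, -d) for channel, d in zip(recordings, delays)]
    return np.mean(aligned, axis=0)

# Illustrative usage: two microphones ~16 cm apart and a mouth ~10 cm below the frame.
fs = 16000
mics = np.array([[-0.08, 0.0, 0.0], [0.08, 0.0, 0.0]])
mouth = np.array([0.0, -0.10, 0.02])
recordings = np.random.randn(2, fs)  # stand-in for one second of audio per microphone
enhanced = delay_and_sum(recordings, mics, mouth, fs)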

[0168] FIG. 11 illustrates an embodiment of the system illustrated in FIG. 10, which may be utilized to make detections regarding a change to the extent of an RTI relative to a known extent of the RTI, and/or whether a user exhibits early signs of an RTI. The system illustrated in FIG. 11 includes smartglasses 264 having various sensors for taking measurements of the user 260. Optionally, these measurements include values measured during different periods of time. Recently taken measurements, e.g., measurements taken during the preceding minutes or hours, may be considered “current measurements” (denoted as current measurements 262 in FIG. 11). Previously taken measurements, such as measurements taken at least 4 hours before the current measurements 262, or on preceding days, are considered “earlier measurements” or “baseline measurements” (denoted earlier measurements 261).

[0169] The computer 265 receives measurements taken with the smartglasses 264, and utilizes them to detect whether the user 260 exhibits early signs of an RTI, and/or to detect a change to the extent of the RTI relative to a previously known extent of the RTI, by comparing the current measurements to the earlier measurements.

[0170] When the earlier measurements 261 were taken during a previous period when the user 260 did not exhibit early signs of an RTI, these earlier measurements 261 may serve as baseline measurements for a healthy state (or a non-RTI state) of the user 260.

[0171] When a person is not upright, in some cases, this may influence respiration sounds and/or increase the extent of coughing (e.g., because of movement of fluids in the lungs). Thus, it can be important to receive an indication of the angle of the head and perform calculations when the person is in a consistent state, such as being upright. The movement sensor 206 may also provide a signal indicative of an orientation of the user’s head relative to gravity, and the detection of the early signs of the RTI may consider the orientation.
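As a non-limiting illustration of using the orientation signal, the following Python sketch derives a coarse "upright" indication from a head-mounted accelerometer reading; the axis convention, tolerance angle, and example values are assumptions for illustration only.

import numpy as np

def head_is_upright(accel, tolerance_deg=25.0):
    # accel: (ax, ay, az) from the head-mounted movement sensor, with the z axis assumed
    # to point toward the top of the head when the wearer is upright.
    a = np.asarray(accel, dtype=float)
    tilt = np.degrees(np.arccos(abs(a[2]) / np.linalg.norm(a)))
    return tilt <= tolerance_deg  # True when gravity lies roughly along the head's vertical axis

print(head_is_upright([0.05, 0.10, 9.75]))  # approximately upright -> True
print(head_is_upright([9.60, 0.20, 1.50]))  # lying on one side -> False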

[0172] The known extent of the RTI may be provided by a medical professional who examines the patient and/or by a self-report of the user or a caregiver after being prompted to do so. Optionally, the querying of the user is done in response to analysis of measurements in which there are borderline signs.

[0173] To detect the RTI, the computer 265 may utilize a machine learning-based approach that requires generating feature values based on data that includes the current measurements 262 and the earlier measurements 261, and then using a machine learning-trained model to calculate, based on the feature values, a value indicative of the change relative to the known extent of the RTI. Various known computational approaches may be utilized to train the model. Optionally, the model includes parameters of at least one of the following types of models: a regression model, a neural network, a nearest neighbor model, a support vector machine, a support vector machine for regression, a naive Bayes model, a Bayes network, and a decision tree.

[0174] Feature values generated from measurements of the movement sensor 206 may also include values of a respiration parameter (e.g., the respiration rate) derived from movement data, or feature values known in the art that may be used to determine respiration from movement data. An example of an approach known in the art that may be utilized for this purpose is described in Hernandez, Javier, et al., "Cardiac and respiratory parameter estimation using head-mounted motion-sensitive sensors" (2015).

[0175] In one embodiment, the computer 265 may utilize the head movement signal to select first and second portions of the audio recordings belonging to the earlier measurements 261 and the current measurements 262, respectively. Optionally, the first and second portions are selected to correspond to periods in which the head movement signal did not indicate head movements that are characteristic of coughing. In one example, the computer 265 calculates, based on the first and second portions, first and second respective respiration rates of the user, and generates, based on the first and second respiration rates, one or more of the feature values, which are utilized to calculate the value indicative of the change relative to the known extent of the RTI.
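A minimal, non-limiting sketch of selecting such non-coughing portions is given below; it assumes numpy, assumes that cough-like head movements have already been flagged at a fixed rate from the movement signal, and the sampling rates and durations are illustrative. A respiration rate could then be estimated from the returned audio, e.g., with an envelope-and-peak approach similar to the earlier sketch.

import numpy as np

def non_coughing_audio(audio, fs, cough_flags, flag_rate):
    # audio: 1-D recording at fs Hz; cough_flags: boolean array at flag_rate Hz that is True
    # where the movement signal indicates head movements characteristic of coughing.
    samples_per_flag = int(fs / flag_rate)
    keep = np.repeat(~np.asarray(cough_flags, dtype=bool), samples_per_flag)
    keep = keep[:len(audio)]
    return audio[:len(keep)][keep]  # audio from periods without cough-like head movements

# Illustrative usage: 10 s of audio at 8 kHz, movement flags at 10 Hz, a cough during seconds 3-4.
fs, flag_rate = 8000, 10
audio = np.random.randn(10 * fs)
flags = np.zeros(100, dtype=bool)
flags[30:40] = True
print(len(non_coughing_audio(audio, fs, flags, flag_rate)))  # 72000 samples, i.e. 9 s kept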

[0176] In yet another embodiment, values obtained by the movement sensor 206 may be indicative of movements of the user’s body and provide information indicative of whether the user was stationary or active (e.g., walking, running, etc.). Optionally, the computer 265 utilizes these values to identify first and second portions of the audio recordings in the earlier measurements 261 and the current measurements 262, respectively, which were recorded after the user was stationary for at least a certain period, which is greater than one minute. The computer 265 calculates first and second extents of coughing based on these first and second portions, and generates, based on the first and second extents of coughing, one or more of the feature values, which are utilized to calculate the value indicative of the change relative to the known extent of the RTI.

[0177] In one embodiment, the feature values generated based on the current measurements 262 and the earlier measurements 261 include feature values indicative of the following: an average breathing rate during the earlier period (in which the earlier measurements 261 were taken), an average breathing rate during the current period (in which the current measurements 262 were taken), an average number of coughs per hour during the earlier period, and an average number of coughs per hour during the current period. The feature values also include a value representing the known extent of the RTI the user 260 had during the previous period. Optionally, the feature values may also include the average skin temperatures (as measured by the skin temperature sensor 208) during the previous period and the current period, and the average environment temperatures (as measured by the environment temperature sensor 210) during the previous period and the current period.
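By way of a non-limiting illustration of this example, the following Python sketch assembles such a feature vector from per-period aggregates; the dictionary keys, numeric values, and the ordering of features are assumptions made only for illustration.

import numpy as np

def rti_change_features(earlier, current, known_extent):
    # earlier and current are dicts of per-period aggregates; known_extent is the known
    # extent of the RTI during the earlier period.
    keys = ["breath_rate", "coughs_per_hour", "skin_temp", "env_temp"]
    return np.array([earlier[k] for k in keys] +
                    [current[k] for k in keys] +
                    [known_extent], dtype=float)

earlier = {"breath_rate": 14.0, "coughs_per_hour": 1.0, "skin_temp": 34.0, "env_temp": 22.0}
current = {"breath_rate": 19.0, "coughs_per_hour": 7.5, "skin_temp": 35.2, "env_temp": 21.5}
print(rti_change_features(earlier, current, known_extent=0))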

[0178] Coughing generates a distinctive auditory signal. This enables not only identification of coughing events, but also identification of different types of coughs due to their different spectral properties, and there has been extensive work in the area of identifying and/or characterizing coughs from audio data, such as Rudraraju, Gowrisree, et al. "Cough sound analysis and objective correlation with spirometry and clinical diagnosis." Informatics in Medicine Unlocked (2020): 100319.

[0179] In some embodiments, the earlier measurements 261 may be selected such that the user 260 was in a condition (while the earlier measurements 261 were taken) that is similar to the condition the user 260 was in when the current measurements 262 were taken. Being in “a similar condition” may mean different things in different embodiments. For example, the computer 265 may select the earlier measurements 261 such that a difference between the temperature in the environment, measured while the earlier measurements 261 were taken, and the temperature in the environment, measured while the current measurements 262 were taken, is below a predetermined threshold, such as below 7°C. In another example, the computer 265 calculates, based on measurements of the movement sensor 206 belonging to the current measurements 262, a current level of physical activity that belongs to a set comprising: being stationary, and walking; and selects earlier measurements 261 that were taken while the user’s movements were indicative of a similar level of physical activity.
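The following non-limiting sketch illustrates selecting earlier measurements taken in a similar condition, using the environment-temperature threshold and the coarse activity level discussed above; the record structure and field names are hypothetical.

def select_comparable_baseline(candidates, current, max_temp_diff=7.0):
    # candidates: earlier measurement records, e.g. {"t_env": 21.0, "activity": "stationary", ...};
    # current: the corresponding record for the current measurements.
    return [c for c in candidates
            if abs(c["t_env"] - current["t_env"]) < max_temp_diff
            and c["activity"] == current["activity"]]

earlier = [
    {"t_env": 20.0, "activity": "stationary"},
    {"t_env": 31.0, "activity": "stationary"},
    {"t_env": 21.0, "activity": "walking"},
]
current = {"t_env": 22.0, "activity": "stationary"}
print(select_comparable_baseline(earlier, current))  # keeps only the first record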

[0180] Adjustable smartglasses with bendable temples.

[0181] Eyeglasses have been used for many years, and materials for eyeglasses frames are well-known in the art, such as plastic frame materials and metal frame materials. Examples of well-known materials used in plastic eyeglasses frames include cellulose acetate, cellulose acetate propionate, and blended nylon. Examples of well-known materials used in metal eyeglasses frames include titanium, Monel, beryllium, stainless steel, Flexon, and aluminum.

[0182] Many of the well-known materials for eyeglasses frames enable someone fitting eyeglasses, such as an optician, to bend the frame around the human ear to fit the frame by applying pressure, sometimes after heating the frame material with an eyeglass frame heater, also known as an eyeglass frame warmer or blow dryer (all referred to herein as a blower). However, all prior art smartglasses frames having electronic components both in front of and behind the ear do not enable the optician to bend the frame around the ear when fitting, because such bending can harm the electronic components, which are not designed to be bent. Therefore, there is a need for a new mechanical design for smartglasses that will enable the optician to bend the smartglasses frame around the ear for fitting.

[0183] This paragraph discloses claims that the Applicant may file in a divisional patent application. The order of the optional dependent claims below is not limiting, and the dependent claims may be arranged according to any order and multiple dependencies.
1. Smartglasses comprising: a front element configured to support lenses; two temples coupled to the front element; each temple comprising: a first portion, coupled to the front element, comprising first electronic components; a second portion, coupled to the first portion, comprising electric wires; a third portion, coupled to the second portion, comprising second electronic components; wherein the second portion is designed to be bent around a human ear to improve the smartglasses’ fit, and the first and third portions are not designed to be bent to improve the smartglasses’ fit.
2. The smartglasses of claim 1, wherein the first and third portions, which are not designed to be bent, are stiffer than the second portion that is designed to be bent to improve the smartglasses’ fit.
3. The smartglasses of claim 1, wherein the outer protective shells of the first and second portions are made of different materials.
4. The smartglasses of claim 1, wherein the outer protective shells of the second and third portions are made of different materials.
5. The smartglasses of claim 1, wherein the outer protective shells of the two temples are made of material comprising at least one of: cellulose acetate, cellulose acetate propionate, and blended nylon.
6. The smartglasses of claim 1, wherein the outer protective shells of the two temples are made of a material designed to be bent after being warmed, and the outer protective shells in the second portions are thinner than the outer protective shells in the first and third portions; whereby the thinner protective shells in the second portions make it easier to bend the second portions than to bend the first and third portions.
7. The smartglasses of claim 1, wherein the outer protective shells of the two temples are made of a material designed to be bent after being warmed, and the first and third portions also comprise strengthening bars to prevent accidental bending, while the second portions do not include the strengthening bars.
8. The smartglasses of any one of claims 1 to 7, wherein the second electronic components comprise a battery.
9. The smartglasses of claim 8, wherein the second portion is configured to be heated before being bent, and the battery is protected by a thermal insulation configured to reduce thermal conduction from the second portion to the battery stored in the third portion.
10. The smartglasses of claim 9, wherein the thermal insulation covers the side of the battery located closer to the second portion, and does not cover the opposite side of the battery that is located towards the end of the frame.
11. The smartglasses of any one of claims 1 to 7, wherein the most heat-sensitive electrical component stored in the second portion is less sensitive to heat than the most heat-sensitive electrical components stored in the first and third portions.
12. The smartglasses of any one of claims 1 to 7, wherein the second portion is further designed to be bent inwards, towards the skull of a person wearing the smartglasses, to improve the smartglasses’ fit.
13. The smartglasses of any one of claims 1 to 7, wherein the second portion is marked by markings showing the boundaries of where it is safe to apply pressure to bend the second portion for fitting the smartglasses to its wearer.
14. The smartglasses of any one of claims 1 to 7, wherein the process of bending the second portion comprises heating the second portion with a blower, and further comprises placing a thermal insulation around at least a portion of the first portion and/or second portion in order to protect it from the hot air produced by the blower.

[0184] FIG. 12 illustrates one embodiment of smartglasses 29 with a portion designed to be bent around a human ear to improve the smartglasses’ fit to the wearer. The smartglasses 29 include a front element 30 configured to support lenses 31; two temples coupled to the front element; each temple comprising: (i) a first portion 32, coupled to the front element 30, comprising first electronic components (35, 36), (ii) a second portion 33, coupled to the first portion 32, comprising electric wires 37, and (iii) a third portion 34, coupled to the second portion 33, comprising second electronic components 38. Although FIG. 12 illustrates the reference numerals only on the left side of the frame, the right side of the frame is similar to the left temple in this specific example. In other examples the right and left sides of the frame may include different components.

[0185] The first portion 32 and the third portion 34 include electronic components (35, 36, 38) that are not designed to be bent by an optician to fit the smartglasses 29 to the wearer, while the second portion 33 includes electric wires 37, and optionally additional electronic components, which are designed to be bent around the ear to improve the smartglasses’ fit.

[0186] In one embodiment, the second portion is configured to be heated before being bent, and the first and/or third portions are more sensitive to heat than the second portion. In this case, the first and/or third portions include an optional thermal insulation configured to reduce thermal conduction from the second portion to the first and/or second electronic components stored in the first and/or third portion, respectively. Optional thermal insulation 40, also known as heat insulation, protects the battery 38 by reducing thermal conduction from the second portion to the battery stored in the third portion. Optionally, the thermal insulation 40 does not cover the opposite side of the battery 38 in order to enable heat dissipation from the battery.

[0187] FIG. 13 illustrates one embodiment of smartglasses 50 that include a battery 60 behind the ear, and various sensors before the ear, such as a microphone 51, an outward -facing camera 52, a contact PPG sensor 54, an inward-facing camera 55, a downward-facing camera 56, a microphone 57, a thermal camera 58, and an inertial measurement unit 59. The smartglasses 50 also include markings 61 showing the boundaries of where to apply the pressure while bending the frame. These markings 61 help the optician to ensure he/she does not harm the smartglasses when adjusting the frame to better fit the wearer. [0188] It is noted that the temples may include more than three portions without limiting the scope of the embodiment, and sentences in the form of “portion X coupled to potion Y” also cover indirect connections between the portions, which means that the temple may include additional portion/portions between the first, second and third portions.