Title:
MONITORING DEVICE, METHOD AND COMPUTER PROGRAM FOR DETECTING A MOTION IN AN ENVIRONMENT
Document Type and Number:
WIPO Patent Application WO/2024/057323
Kind Code:
A1
Abstract:
A monitoring device and associated system, method of operation and computer program product, the monitoring device comprising a motion detector for detecting a motion in an environment, the monitoring device being associated with an arming state, the arming state being associated with a security function, the motion detector being configured to detect motion in the environment in both an armed state and an unarmed state, wherein detection of motion is dependent on one or more parameters of the device, and at least one of the parameters is configured to provide a higher acuity in detecting motion in an unarmed state than in an armed state.

Inventors:
AMIR HAIM (IL)
AMIR OHAD (IL)
SCHNAPP JONATHAN MARK (IL)
Application Number:
PCT/IL2023/051006
Publication Date:
March 21, 2024
Filing Date:
September 14, 2023
Assignee:
ESSENCE SECURITY INTERNATIONAL ESI LTD (IL)
International Classes:
G08B13/189; G08B21/04; G08B29/20
Domestic Patent References:
WO2019236440A12019-12-12
Foreign References:
US20200294382A12020-09-17
US20210280029A12021-09-09
US5764146A1998-06-09
Attorney, Agent or Firm:
EHRLICH, Gal et al. (IL)
Claims:
WHAT IS CLAIMED IS:

1. A monitoring device comprising a motion detector for detecting a motion in an environment, the monitoring device being associated with an arming state, the arming state being associated with a security function, the motion detector being configured to detect motion in the environment in both an armed state and an unarmed state, wherein detection of motion is dependent on one or more parameters of the device, and at least one of the parameters is configured to provide a higher acuity in detecting motion in an unarmed state than in an armed state.

2. The monitoring device of claim 1, wherein the higher acuity in detecting motion comprises one or more of: detecting motions that travel a smaller distance, detecting slower motions, and/or detecting faster motions, in the unarmed state than in the armed state.

3. The monitoring device of claim 1, wherein the at least one of the parameters configured to provide a higher acuity comprises one or more of: a gain, a threshold level, a detected frequency range, a number of threshold crossings, within a predefined time window, required to trigger a detection of motion, a frequency-gain profile, and/or transfer function.

4. The monitoring device of any preceding claim, wherein the motion detector comprises a passive infra-red (PIR) motion detector.

5. The monitoring device of claim 4, wherein the at least one of the parameters comprises one or both of: a detection threshold of the PIR detector and/or a gain of the PIR detector, wherein the detection threshold is lower and/or the gain is higher in the unarmed state than in the armed state.

6. The monitoring device of any preceding claim, wherein the at least one of the parameters comprises or affects a minimum angle across a field of view that is to be traversed by an object to cause a motion detection, wherein the minimum angle is less in the unarmed state than in the armed state.

7. The monitoring device of any preceding claim, wherein the at least one parameter comprises a minimum number of times a threshold of a comparator stage of the motion detector is crossed in order to detect motion, wherein the minimum number of times is less in the unarmed state than in the armed state.

8. The monitoring device of any preceding claim, wherein the motion detector is configured to enable detection of smaller movements in the unarmed state than can be detected in the armed state.

9. The monitoring device of claim 4 or any claim dependent thereon, wherein the at least one of the parameters determines or affects a transfer function or gain vs frequency profile of the PIR detector, and wherein in the unarmed state the motion detector is more sensitive to detecting motion than in the armed state for at least one range of frequencies.

10. The monitoring device of any preceding claim, wherein the motion detector is configured to be able to detect motion in the unarmed state in at least one range of frequencies for which the motion detector is unable to detect motion in the armed state and/or the motion detector is configured to have a wider frequency range in the unarmed state than in the armed state.

11. The monitoring device of any preceding claim, wherein the motion detector has a frequency response by which the motion detector can detect motion corresponding to an extended range of frequencies in the unarmed state compared to the armed state.

12. The monitoring device of any preceding claim, wherein, when in the armed state, the monitoring device is configured for intruder detection, and when in the unarmed state is configured to perform a different function that comprises monitoring the environment for motion.

13. The monitoring device of any preceding claim, wherein when in the armed state, the monitoring device operates at least one module of the monitoring device that is not the motion detector to capture data from a threat-verification field of view spanning the field of view of the motion detector to enable threat verification, said operating of the at least one module being directly responsive to the detection of motion by the motion detector.

14. The monitoring device of any preceding claim, comprising a sound transducer and a sound processing module for processing an output of the sound transducer, wherein, when the monitoring device is associated with the unarmed state, the monitoring device is configured to perform a process in which: the sound processing module is triggered to operate in response to motion detected by the motion detector; and at least one action is performed in response to the sound processing module determining that an audio trigger is represented in the output of the sound transducer.

15. The monitoring device according to claim 14, wherein the monitoring device is configured to perform the at least one action responsive to detection of both motion by the motion detector when the monitoring device is in the unarmed state and the subsequent determination that the audio trigger is represented in the output of the sound transducer.

16. The monitoring device of claim 14 or claim 15, comprising at least one image capturing unit, and wherein the at least one action comprises triggering the at least one image capturing unit to capture at least one image.

17. The monitoring device of any of claims 14 to 16, wherein the audio trigger is an audio trigger indicative of peril.

18. A system comprising the monitoring device of any preceding claim and a controller, the controller having arming state awareness in which the controller is configured to know when the security system is in the armed state or unarmed state.

19. The system of claim 18, wherein the controller is configured to: when the system is in the armed state, signal the monitoring device to operate at least one module that is not the motion detector to capture data from a verification field of view spanning the field of view of the motion detector to enable threat verification, said operating of the at least one component being directly responsive to the detection of motion by the motion detector; and when the system is in the unarmed state, signal the monitoring device to perform at least one action responsive to detection of both motion by the motion detector when the monitoring device is associated with the unarmed state and the detection of a subsequent audio trigger.

20. The system of claim 19, comprising at least one image capturing unit, and wherein the at least one action comprises triggering the at least one image capturing unit to capture at least one image.

21. The system of claim 20, wherein the system comprises at least one other image capturing unit; and the monitoring device is configured to transmit a notification responsive to determining that the audio trigger is present in the output of the sound transducer, the notification being a notification for the at least one other image capturing unit to collect at least one image.

22. A method of operation of a monitoring device, the monitoring device comprising a motion detector for detecting a motion in an environment, wherein detection of motion is dependent on one or more parameters of the monitoring device, the monitoring device being associated with an arming state, the arming state being associated with a security function, wherein the method comprises: operating the monitoring device to detect motion in the environment in both an armed state and an unarmed state, wherein at least one of the parameters is configured to provide a higher acuity in detecting motion in the unarmed state than in the armed state.

23. The method of claim 22, comprising identifying whether the monitoring device is in the armed state or unarmed state and setting a value of the at least one of the parameters to provide a higher acuity in detecting motion when it is identified that the monitoring device is in the unarmed state than when it is identified that the monitoring device is in the armed state.
24. A computer program embodied on a non-transient, tangible computer readable carrier medium, the computer program being configured such that when implemented by a processor of a monitoring device that comprises a motion detector for detecting a motion in an environment, wherein detection of motion is dependent on one or more parameters of the monitoring device, the monitoring device being associated with an arming state, the arming state being associated with a security function, the computer program causes the processor to cause the monitoring device to operate such that: the monitoring device detects motion in the environment in both an armed state and an unarmed state; and at least one of the parameters is configured to provide a higher acuity in detecting motion in the unarmed state than in the armed state.

Description:
ARMING STATE AWARE MONITORING DEVICE

RELATED APPLICATION/S

This application claims the benefit of priority of GB Patent Application No. 2213548.7, filed on September 15, 2022, the contents of which are incorporated herein by reference in their entirety.

FIELD

The present disclosure relates to a monitoring device that can be switched between an armed state and a disarmed state and associated system, method of operation and computer program product that implements the method.

BACKGROUND

Monitoring systems for monitoring a premises are widely available. Examples of such systems include security systems, smart home automation systems, health monitoring systems, and/or the like, but are not limited to these. For example, some monitoring systems may comprise a device comprising a motion detector for detecting motion at the premises, wherein the system is configured to take an action in response to the detected motion, such as performing a follow-up sensor-based measurement (e.g. to gather more information associated with an event that caused a detected motion), raising an alarm, taking an intervention action (e.g. a deterrent action), and/or the like.

For security systems the action in response to the detected motion may be conditional upon the security system being in an armed state. The security system can be switchable between an unarmed state, in which the system does not take the action in response to the detected motion, and an armed state in which the system does take the action, at least partly in response to the detected motion, and optionally also to one or more additional conditions being met.

SUMMARY

Various aspects of the present disclosure are defined in the independent claims. Some preferred features are defined in the dependent claims.

According to a first example of the present disclosure, there is provided a monitoring device comprising: a motion detector for detecting a motion in an environment, wherein detection of motion is dependent on one or more parameters of the monitoring device, the monitoring device being associated with an arming state, the arming state being associated with a security function, wherein: the motion detector is configured to detect motion in the environment in both an armed state and an unarmed state; and at least one of the parameters is configured to provide a higher acuity in detecting motion in the unarmed state than in the armed state.

The motion detector may be configured to have a higher acuity to detecting motion in the unarmed state than in the armed state. That is, at least one, some or all of the one or more parameters of the device may be configured to provide a higher acuity in detecting motion in the unarmed state than in the armed state. The higher acuity may optionally relate to a specific range or condition of operation, for example acuity in detecting small (short-distance) movements or slow movements.

The higher acuity in detecting motion may comprise one or more of: detecting motions that travel a smaller distance in the unarmed state than in the armed state, detecting slower motions in the unarmed state than in the armed state, and/or detecting faster motions in the unarmed state than in the armed state.

The monitoring device may be part of a system, such as a security system or another system that implements security functionality, for example. The arming state may be an arming state of the system. The arming state may be associated with the security function of the system.

The monitoring device may be configured to perform a function that comprises detecting motion in the environment in the armed state and may be configured to not perform that function and optionally to perform a different function that comprises detecting motion in the environment in the unarmed state. The function that comprises detecting motion in the environment in the armed state may comprise a security function, e.g. threat detection (e.g. intruder detection), and/or an intervention action (e.g. an intruder deterrent action). The different function that comprises detecting motion in the environment in the unarmed state may comprise detecting a state and/or activity of an occupant. When the monitoring device is associated with the unarmed state, such an occupant would likely be a legitimate occupant. The different function that comprises detecting motion in the environment in the unarmed state may comprise detecting peril, e.g. when the occupant is in or at risk of being in a perilous situation. For example, when in the armed state, the monitoring device may operate at least one module of the monitoring device that is not the motion detector to capture data from a threat-verification field of view spanning the field of view of the motion detector to enable threat verification, wherein said operating of the at least one module may be directly responsive to the detection of motion by the motion detector. In some embodiments, when associated with the unarmed state, the monitoring device may be configured to perform at least one action responsive to detection of both motion by the motion detector when the monitoring device is associated with the unarmed state and the detection of a subsequent audio trigger.

Providing a monitoring device having an arming state associated with at least the motion detector in which at least one of the parameters of the monitoring device upon which detection of motion depends is configured to provide a higher acuity in detecting motion in the unarmed state than in the armed state may seem counterintuitive. However, when the monitoring device is being used for certain purposes, such as but not limited to intruder or other threat detection, then some limitation of the potential maximum acuity of the monitoring device may be preferable, e.g. to reduce or eliminate spurious motion detections such as motion detections due to pets, flapping curtains, swinging doors and/or the like. This limitation of the potential maximum acuity of the monitoring device may be particularly beneficial in certain applications, such as but not limited to intruder or other threat detection, as a particularly serious action may be taken directly or indirectly based on the detection, such as the raising of an alarm, activating or otherwise operating one or more threat verification devices, and/or the like. As such, having an acceptably low rate of “false positives” in motion detection may be more important or beneficial than maximum acuity of motion detection in the armed state. However, in the unarmed state, the monitoring system may be used for a different purpose, which may be a purpose other than intruder or other threat detection. For example, the monitoring system may be used to monitor activities of an occupant of a premises, e.g. in order to detect a perilous situation in which the occupant may find themselves. In such cases, a monitoring device that has a higher acuity in the unarmed state than in the armed state may be better able to perform the different functions in the armed and unarmed states.

The at least one of the parameters configured to provide a higher acuity may comprise, determine or affect one or more of: a gain, a threshold level, a detected frequency range, a number of threshold crossings, within a predefined time window, required to trigger a detection of motion, a frequency-gain profile, and/or transfer function.
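By way of illustration only, the following sketch (not part of the application; all names and values are assumed) shows how such detection parameters could be held as per-arming-state profiles, with the unarmed profile tuned for higher acuity:

from dataclasses import dataclass

@dataclass(frozen=True)
class DetectionProfile:
    gain: float                 # amplification applied to the sensor output
    threshold: float            # level the processed signal must exceed
    band_hz: tuple              # (low, high) detected frequency range
    crossings_required: int     # threshold crossings needed within the window
    window_s: float             # time window for counting crossings

# Higher acuity when unarmed: higher gain, lower threshold, wider band,
# fewer crossings required before motion is declared (values illustrative).
ARMED = DetectionProfile(gain=1.0, threshold=0.8, band_hz=(0.4, 2.0),
                         crossings_required=3, window_s=4.0)
UNARMED = DetectionProfile(gain=2.0, threshold=0.4, band_hz=(0.1, 6.0),
                           crossings_required=1, window_s=4.0)

def active_profile(armed: bool) -> DetectionProfile:
    """Select the detection profile matching the current arming state."""
    return ARMED if armed else UNARMED

A device built along these lines would select active_profile(armed) whenever the arming state changes and apply the returned values to its detection chain.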

The higher acuity may be or comprise detection of motion with a higher sensitivity. The motion detector may comprise a passive infra-red (PIR) motion detector. The at least one of the parameters configured to provide a higher acuity may comprise the threshold level, which may be or comprise a detection threshold of the PIR detector. The detection threshold may be a setting of the PIR detector. The detection threshold may be lower in the unarmed state than in the armed state. The motion detector may be configured to determine that motion has been detected when the output of the PIR motion detector, or a function of the output of the PIR motion detector, exceeds the detection threshold. Additionally or alternatively, the at least one of the parameters configured to provide a higher acuity may comprise the gain, which may be a gain of the PIR detector. The gain may be a setting of the PIR detector. The gain may be higher in the unarmed state than in the armed state. The gain may be a gain applied to an output of the PIR motion detector. That is, the higher acuity may be provided by one or both of the detection threshold of the PIR detector being lower in the unarmed state than in the armed state and/or the gain of the PIR detector being higher in the unarmed state than in the armed state.

The at least one of the parameters may comprise or affect a minimum angle across a field of view that is to be traversed by an object to cause a motion detection, wherein the minimum angle may be less in the unarmed state than in the armed state. The angle across a field of view traversed by an object may be a function of a number of times a threshold of a comparator stage of a motion detector has been crossed. The PIR motion detector may comprise a lens array, which may comprise a Fresnel lens, having a plurality of facets or lenslets, at least some of which may be distributed laterally across the field of view of the PIR motion detector. The PIR motion detector may comprise a pyroelectric sensor module having at least one pair of pyroelectric sensor elements. The pyroelectric sensor elements may be arranged in series but having opposing polarities (e.g. arranged back to back), where each of the elements may have a different perspective with respect to the field of view of the motion detector. The pyroelectric sensor module may be configured to output an electrical signal indicative of IR radiation received from the optical stage.

The at least one of the parameters configured to provide a higher acuity may comprise the number of threshold crossings within a defined or predefined time window that are required to trigger a detection of motion. Each crossing of the threshold may comprise an object being imaged to the pyroelectric module of the PIR motion detector by a different facet or lenslet of the lens array. The at least one parameter may comprise a minimum number of times a threshold of a comparator stage of the motion detector is crossed in order to detect motion, wherein the minimum number of times may be less in the unarmed state than in the armed state. The comparator stage may be configured to apply a positive threshold and a negative threshold. The at least one parameter may comprise a minimum number of times one of: the positive threshold, the negative threshold or both the positive and negative thresholds of the comparator stage of the motion detector are crossed in order to detect motion, wherein the minimum number of times may be less in the unarmed state than in the armed state.
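A minimal sketch of such a crossing-count criterion follows; the sample and timestamp inputs, thresholds and class name are assumptions for illustration. A smaller crossings_required value in the unarmed state lets a shorter traversal of the field of view (fewer lens facets crossed) register as motion.

from collections import deque

class CrossingCounter:
    def __init__(self, pos_threshold: float, neg_threshold: float,
                 crossings_required: int, window_s: float):
        self.pos = pos_threshold
        self.neg = neg_threshold
        self.required = crossings_required
        self.window_s = window_s
        self.events = deque()   # timestamps of threshold crossings
        self.prev = 0.0

    def update(self, t: float, sample: float) -> bool:
        """Feed one amplified PIR sample; return True when motion is declared."""
        crossed = (self.prev <= self.pos < sample) or (self.prev >= self.neg > sample)
        self.prev = sample
        if crossed:
            self.events.append(t)
        # Drop crossings that fall outside the sliding time window.
        while self.events and t - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) >= self.required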

The motion detector may be configured to enable detection of smaller movements in the unarmed state than can be detected in the armed state. The at least one of the parameters configured to provide a higher acuity may comprise, determine or affect the transfer function or gain vs frequency profile, e.g. the transfer function or gain vs frequency profile of the PIR detector. In the unarmed state the motion detector may be more sensitive to detecting motion than in the armed state for at least one range of frequencies. For example, the gain stage of the PIR detector may be configured to apply different gains to different frequencies of the output of the pyroelectric sensor module of the PIR detector. Different frequencies of the output, i.e. of the electrical output signals, of the pyroelectric sensor module may be indicative of different types of motion. Higher frequencies in the output of the pyroelectric sensor module may be indicative of faster movements and lower frequencies in the output of the pyroelectric sensor module may be indicative of slower movements. Thus, compared with a frequency range corresponding to expected or specified human motion, higher or lower frequencies may be indicative of noise, e.g. due to pet motion, or leaves or curtains swaying in wind, etc. Thus, the gain stage may provide a transfer function or gain vs frequency profile targeting the expected or specified human motion.

The pyroelectric sensor module may be configured to output a signal having a range of frequencies, that comprise a higher part of the range, that is higher in frequency than a lower part of the frequency range. The at least one of the parameters that provide a higher acuity in detecting motion in the unarmed state than in the armed state may comprise a higher gain or transfer function selectively for the higher part of the range when in the unarmed state than when in the armed state, e.g. for better detecting faster motions. The at least one of the parameters that provide a higher acuity in detecting motion in the unarmed state than in the armed state may comprise a higher gain or transfer function for the lower part of the range when in the unarmed state than when in the armed state, e.g. for better detecting slower motions.

The at least one of the parameters configured to provide a higher acuity may comprise the detected frequency range. The motion detector may be configured to be able to detect motion in the unarmed state in at least one range of frequencies for which the motion detector is unable to detect motion in the armed state. The motion detector may be configured to have a wider frequency range in the unarmed state than in the armed state. The motion detector may have a frequency response by which the motion detector can detect motion corresponding to an extended range of frequencies in the unarmed state compared to the armed state. In this manner some types of human motion may be detected in the unarmed state that may not be detected in the armed state. For example, in the armed state, in order to reduce false alarms, the range of frequencies for which motion is detectable may be more restricted, for example to capture a walking person, but not necessarily to capture more subtle motion. In the unarmed state, where it may be more acceptable for a detection to be due to noise, it may be advantageous to detect other motion, e.g. more subtle motion, such as lower frequency motion, and therefore to increase acuity to detecting motion at lower frequencies than can be achieved when in the armed state. It may additionally or alternatively be advantageous to increase acuity to detecting motion at higher frequencies in the unarmed state than can be achieved in the armed state.

The transfer function or gain vs frequency profile of the PIR detector may be configured to preferentially take into account output of the pyroelectric sensor module of the PIR detector that is within a selected frequency band or window. For example, the transfer function or gain vs frequency profile of the PIR detector may be configured to apply a higher gain to output of the pyroelectric sensor module of the PIR detector that is within the selected frequency band or window and/or apply a lower gain to output of the pyroelectric sensor module of the PIR detector that is outside the selected frequency band or window. The transfer function or gain vs frequency profile of the PIR detector may apply a band pass filter, high pass filter or a low pass filter to the output of the pyroelectric sensor module of the PIR detector. The at least one of the parameters may comprise an upper and/or lower pass limit or an upper or lower limit of the selected frequency band or window. The upper pass limit and/or upper limit of the selected frequency window may be higher in the unarmed state than in the armed state. The lower pass limit and/or lower limit of the selected frequency window may be lower in the unarmed state than in the armed state. The selected frequency window may be wider in the unarmed state than in the armed state.
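Purely as an illustration, and assuming hypothetical cut-off frequencies and a 50 Hz sampling rate, a state-dependent band-pass stage of this kind could be sketched in software as follows:

from scipy import signal

def bandpass_sos(armed: bool, fs: float = 50.0):
    """Return second-order sections for the active frequency window."""
    low, high = (0.4, 2.0) if armed else (0.1, 6.0)   # Hz; wider when unarmed
    return signal.butter(2, [low, high], btype="bandpass", fs=fs, output="sos")

def filter_pir(samples, armed: bool, fs: float = 50.0):
    """Apply the arming-state-dependent pass band to raw PIR samples."""
    return signal.sosfilt(bandpass_sos(armed, fs), samples)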

In this way, the one or more parameters may be configured so that the monitoring system can capture one or both of slower motions and/or faster motions in the unarmed state than when in the armed state.

The monitoring device may comprise a sound transducer for converting a sound wave to an electrical signal and a sound processing module for processing an output of the sound transducer. The monitoring device may be configured such that, when in the unarmed state, it performs a process in which the sound processing module is triggered to operate in response to motion detected by the motion detector. The process performed by the monitoring device when in the unarmed state may further comprise performing the at least one action in response to the sound processing module determining that the audio trigger is represented in the output of the sound transducer.

The monitoring device may comprise at least one image capturing unit. The at least one action may comprise triggering the at least one image capturing unit to capture at least one image. The audio trigger may be an audio trigger indicative of peril, e.g. indicative of the authorised occupant being in a potentially perilous situation. The triggering of the sound processing module may comprise switching the sound processing module, and optionally also the sound transducer, from a state in which audio is at least one or each of: not collected, not recorded and/or not processed into a state in which audio is at least one or each of: collected, recorded and/or processed.

The triggering of the sound processing module may comprise switching the sound processing module from a lower power state into a higher power state. The lower power state may be an inactive state, sleep or off state. The higher power state may be an active, awake or on state.

The at least one image may comprise a still image or a video.

The triggering of the image capturing unit may comprise signalling the image capturing unit to capture the image and the image capturing unit capturing the image responsive to receiving the signal. The triggering of the image capturing unit may comprise switching the image capturing unit from a state in which images are at least one or each of: not collected, not recorded and/or not processed into a state in which images are at least one or each of: collected, recorded and/or processed.

The audio trigger may comprise or be representative of an indication of peril.

The sound processing module may be configured to identify any of a plurality of audio triggers. Each audio trigger of the plurality of audio triggers may be indicative of a different indication of peril. The sound processing module may be configured to determine that an audio trigger is present in the output of the sound transducer when the sound processing module determines that any of the plurality of audio triggers are present in the output of the sound transducer. By providing a plurality of different audio triggers, it may be easier for a user to provide a suitable indication of peril, particularly in stressful situations usually associated with peril.

At least one of the audio triggers may comprise a vocal indication of peril. At least one of the vocal indications of peril may be a verbal indication of peril, such as a keyword or key-phrase. Different indications of peril may comprise different keywords and/or key-phrases. The keyword or key-phrase may comprise a word or phrase indicative of peril, such as at least one of: “help”, “emergency”, “intruder”, “burglar”, “injured”, “wounded” and/or the like.

At least one of the indications of peril may be a vocal but non-verbal indication of peril, such as at least one of: a scream, yell, shout, gasp, gurgling sound, sound of difficult breathing and/or the like. At least one of the audio triggers may comprise a non-vocal indication of peril, such as a sound of at least one of: a crash, a smashing sound, an explosion, a sound of grinding metal, an alarm sounding, and/or the like. Different indications of peril from the plurality of indications of peril may comprise at least one or each of: different words that are indicative of peril, different vocal sounds indicative of peril and/or different non-vocal sounds indicative of peril. By allowing non-verbal vocal sounds and/or other non-word noises indicative of peril, the monitoring device may better respond to the peril, particularly in situations of stress usually associated with peril where a user may be less able to remember or to speak a required keyword or key-phrase.
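As a simplified, assumed sketch (the keyword list echoes the examples above; the speech and sound-event recognition itself is treated as an external component, not something defined in the application), matching any one of a plurality of audio triggers might look like:

# Verbal and non-verbal trigger labels; illustrative only.
PERIL_KEYWORDS = {"help", "emergency", "intruder", "burglar", "injured", "wounded"}
PERIL_SOUND_EVENTS = {"scream", "crash", "glass_break", "alarm"}

def audio_trigger_present(transcribed_words, detected_sound_events) -> bool:
    """True if any verbal or non-verbal indication of peril was recognised."""
    words = {w.lower() for w in transcribed_words}
    return bool(words & PERIL_KEYWORDS) or bool(set(detected_sound_events) & PERIL_SOUND_EVENTS)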

The monitoring device may be configured to perform the process wherein the process comprises determining if motion is detected by the motion detector; triggering the sound processing module to operate in response to determining that motion has been detected by the motion detector; and triggering the image capturing unit to capture the at least one image in response to it being determined that the audio trigger is present in the output of the sound transducer. In this way, all processing required to trigger the image capturing unit according to the process may be performed locally by the monitoring device.
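The unarmed-state process just described could be sketched as follows, with motion_detector, sound_module and camera standing in for assumed module interfaces rather than any actual API of the device:

def run_unarmed_cycle(motion_detector, sound_module, camera):
    """Motion wakes the sound path; a peril trigger then wakes the camera."""
    if not motion_detector.motion_detected():
        return None
    sound_module.wake()                        # triggered only after motion
    if sound_module.audio_trigger_detected():  # e.g. keyword or scream
        return camera.capture()                # the "at least one action"
    return None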

Some or all processing required to trigger the image capturing unit may be performed by a processing module of the monitoring device. The processing module may comprise or be implemented on one or more processing devices, which may comprise at least one processor, which may be a multi-core processor, ASIC, or FPGA and/or at least one electronic, logic or electrical circuit configured to perform processing, and/or the like, or any combination thereof. The processing module may be implemented by software running on the at least one processor. The processing module may be a unitary device. The processing module may be a distributed processing module. The processing module may comprise or be implemented on at least one processing device of one or more of: the motion detector, the sound processing module, the image capturing unit, and/or a control module of the monitoring device, or any combination thereof.

The processing module may comprise or be configured to access data storage. The processing module may be configured to determine if motion is detected by the motion detector.

The processing module may be configured to trigger the sound transducer in response to determining that motion has been detected by the motion detector. The processing module may be configured to determine that the audio trigger is present in the output of the sound transducer. The processing module may be configured to trigger the image capturing unit to capture the at least one image in response to it being determined that the audio trigger is present in the output of the sound transducer obtained responsive to the motion detected by the motion detector. All processing required to trigger the image capturing unit may be performed locally in the monitoring device, e.g. by the processing module. In this way, a system for identifying perilous situations can be provided but power consumption associated with communications may be reduced, thereby further saving power.

A processor of the motion detector may be configured to determine if motion is detected. The control module may be arranged to receive signals from the processor of the motion detector that motion has been detected. The control module may be configured to trigger the sound processing module to operate in response to motion detected by the motion detector. Alternatively, the processor of the motion detector may be configured to directly trigger the sound processing module to operate in response to motion detected by the motion detector. The control module may be arranged to receive a signal indicative of a determination that an audio trigger is represented in the output of the sound transducer. Alternatively, the sound processing module may be comprised in the control module. The control module may be configured to trigger the image capturing unit to capture at least one image in response to the audio trigger being represented in the output of the sound transducer. Alternatively, the sound processing module may be configured to directly trigger the image capturing unit to capture at least one image in response to the audio trigger being represented in the output of the sound transducer.

The monitoring device may be a unitary device in which the motion detector, the sound transducer, the sound processing module and the image collection unit are all incorporated into a single device. The motion detector, the sound transducer and the image capturing unit may all be physically mounted and/or connected together, e.g. by a housing and/or support of the monitoring device. The motion detector, the sound transducer and the image collection unit may all be arranged to at least operate on a common part of the premises, e.g. to monitor and/or collect at least one image of the common part of the premises. By having the image collection unit in the same device as the sound transducer and the motion detector, the image collection unit may capture an image of the environment of the person that moved and expressed the audio trigger, thereby enabling another person (e.g. at a monitoring station) to identify the condition of, and/or situation involving, the person.

The monitoring device may be mountable to a surface such as a wall, ceiling, post or the like. The monitoring device may comprise at least one fixing arrangement, such as a screw hole, thinned housing portion for receiving a screw through it, an adhesive member and/or the like. The at least one fixing arrangement may be configured for mounting the monitoring device to the surface.

The monitoring device may be a monitoring device for a security system for detecting intruders at or adjacent to the premises. The security system and/or the monitoring device may be switchable between the armed state and the unarmed state responsive to a state selection made by a user. The monitoring device may be configured to receive an indication of state, e.g. of the armed state or the unarmed state. The indication of state may be received from a hub or control unit or directly from a user input device. In the unarmed state, the monitoring device may be configured to perform the said process. In the armed state, the monitoring device may operate at least one component that is not the motion detector to capture data from a verification field of view spanning the field of view of the motion detector to enable threat verification, said operating of the at least one component being directly responsive to the detection of motion by the motion detector. The monitoring device may be configured to, when in the armed state, raise an intruder alarm and/or take a deterrent or confirmation of intrusion action based on, or directly responsive to, the detection of motion by the motion detector.

The monitoring device may be configured to transmit a notification responsive to determining that the audio trigger is present in the output of the sound transducer. The notification may be a notification for at least one other image capturing unit to collect at least one image. The at least one other image capturing unit may be separate from the monitoring device. The at least one other image capturing unit may be remote from the monitoring device, but may be provided in the same premises as the monitoring device. The at least one other image capturing unit may be comprised in another, different monitoring device. The at least one other image capturing unit may be comprised in a common security system to which the monitoring device belongs. The monitoring device may be configured to transmit the notification wirelessly and/or over a local area network. The notification may notify the at least one other image capturing unit to automatically capture at least one image responsive to receiving the notification.

The monitoring device may be configured to transmit the notification indirectly to the at least one other image collecting unit or the device in which the at least one other image collecting unit is comprised. The monitoring device may be configured to transmit the notification to a hub, remote server or other remote device. The hub may be a local hub, which may be located in the premises. The remote server or other remote device may be located away from the premises. The hub, remote server or other remote device may be configured to notify or command the collection of the at least one image by the at least one other image collecting unit responsive to receiving the notification from the monitoring device.

Additionally or alternatively, the monitoring device may be configured to transmit the notification directly to the at least one other image collecting unit or the device in which the at least one other image collecting unit is comprised. The monitoring device may be configured to broadcast the notification and/or to address the notification to the at least one other image collecting unit or the devices in which the at least one other image collecting unit is comprised.

The monitoring device may be configured to apply thresholding to the triggering of the image capturing unit. The sound processing module may comprise a sound thresholding unit configured to make a determination of whether a strength parameter associated with the output of the sound transducer is greater than a threshold, which may be a predefined or set threshold. The sound processing module may be configured to determine whether an audio trigger is represented in an output of the sound transducer responsive to the sound thresholding unit determining that a strength parameter associated with the output of the sound transducer is greater than a threshold. The triggering of the sound processing module to operate in response to motion detected by the motion detector may comprise activating or enabling the sound thresholding unit.

At least the sound transducer may be comprised in a sound sensor. The sound sensor may further comprise the sound processing module.

The triggering of the sound processing module in response to motion detected by the motion detector may comprise switching the sound processing module from the lower power state into the higher power state for a limited or predetermined period of time before automatically returning the sound processing module into the lower power state in the absence of further motion detection during the limited or predetermined period. The limited or predetermined period may be a period in a range from 10 seconds to 30 minutes, e.g. 5 minutes.

The triggering of the sound processing module in response to motion detected by the motion detector may comprise activating the sound thresholding unit to operate for a first maximum duration of time in response to motion detected by the motion detector. The monitoring device may be configured such that, responsive to the sound thresholding unit determining that a strength parameter associated with the output of the sound transducer is greater than a threshold, the sound processing module is activated for a second maximum predetermined period of time, which may be less than the first maximum predetermined period of time. The sound thresholding unit may comprise one or more electrical components, which may comprise a comparator or a differential amplifier. The sound thresholding unit may be implemented in software, e.g. on a processor of the sound processing module. The sound thresholding unit may be implemented in hardware, e.g. as an ASIC, FPGA, or as an electronic, electrical or logic circuit.

The sound thresholding unit may be configured to determine if the sound detected by the sound transducer responsive to motion detected by the motion detector has an audio level or other strength-related characterisation that exceeds the threshold during the limited or predetermined period. The monitoring device may be configured to, responsive to determining that a strength parameter associated with the detected sound exceeds the threshold, switch the sound recognition unit into the higher power state to analyze the output from the sound transducer to determine if the audio trigger is represented. The monitoring device may be configured to switch the sound recognition unit into the higher power state for a period of time in response to determining that the detected sound has an audio level or other strength-related characterisation that exceeds the threshold, before returning the sound recognition unit to the lower power state. The period of time may be 60 seconds or less.
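An illustrative sketch of this timed escalation, using the example durations mentioned above (a roughly five-minute listening window and a 60-second recognition window) and assumed unit interfaces, is:

import time

def on_motion(thresholding_unit, recognition_unit,
              listen_window_s: float = 300.0, recognise_window_s: float = 60.0):
    """Enable thresholding after motion; escalate to recognition on loud sound."""
    thresholding_unit.enable()
    deadline = time.monotonic() + listen_window_s
    while time.monotonic() < deadline:
        if thresholding_unit.strength_exceeds_threshold():
            recognition_unit.wake()                      # higher power state
            found = recognition_unit.listen_for_trigger(timeout=recognise_window_s)
            recognition_unit.sleep()                     # back to lower power
            if found:
                return True
        time.sleep(0.1)
    thresholding_unit.disable()                          # window expired, no sound
    return False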

The image capturing unit may comprise a pixel array provided in an image sensor. The image capturing unit may comprise an image signal processor (ISP), which may perform image correction on raw data captured by the pixel array to provide image data. The image capturing unit may comprise a further processor for image processing (e.g. compression and optionally analytics) of the image data output from the ISP. The monitoring device may be configured such that the pixel array, and optionally the ISP, are powered upon determining that a strength associated with the output of the sound transducer is greater than a threshold, but only capture the one or more images if the audio trigger is determined to be present. The ISP and further processor may be provided by a common processing circuit and/or chip, for example they may be on the same silicon, or in another embodiment they may be on separate silicon, in which case the ISP may optionally be integrated into the image sensor.
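The staged power-up described here and in the following paragraph could be sketched as below; the component handles and method names are assumptions for illustration, not the device's actual interfaces:

def on_sound_threshold_exceeded(pixel_array, isp):
    # Power early so exposure calibration (stabilization) can complete before
    # any image is actually needed.
    pixel_array.power_on()
    isp.power_on()
    isp.stabilize()        # e.g. auto-exposure settles to the lighting

def on_audio_trigger(pixel_array, isp, further_processor):
    further_processor.power_on()              # needed only once frames exist
    raw = pixel_array.capture_frame()
    frame = isp.correct(raw)                  # raw data -> image data
    return further_processor.process(frame)   # compression / analytics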

The monitoring device may be configured such that, when the audio trigger is detected, a signal is provided to cause the pixel array, working with the ISP, to capture an image, which may be thereafter processed by the further processor. In some embodiments at least the pixel array and ISP may be powered or enabled in response to the detection of the audio trigger and may then straight away capture the one or more images. However, in other embodiments, the pixel array, and optionally the ISP, may be powered upon detection of the sound strength threshold being exceeded, but only capture the one or more images if the audio trigger is detected. The monitoring device may be configured such that, upon powering or enabling of the pixel array and ISP, the image capturing unit is stabilized and configured into a condition in which it is ready to capture images for processing by the further processor. During the stabilization process the image capturing unit may calibrate the exposure time to suit the lighting conditions. However, powering the pixel array consumes significant power, and thus in some embodiments the powering or enabling of at least the pixel array may not occur at least until the sound strength threshold is exceeded, or when the audio trigger is detected. The ISP may optionally also be powered responsive to the sound strength threshold being exceeded to enable the stabilization to occur at that time so that if/when image capturing is needed the camera would have already stabilized. The further processor may be powered or enabled at the same time as the ISP, or later if the ISP and pixel array are powered before there is a need to capture an image, but in either case the further processor may be powered/enabled by the time any images are captured.

According to a second example of the present disclosure, there is provided a system comprising the monitoring device of the first example and a controller, the controller having arming state awareness in which the controller is configured to know when the system is in the armed state or unarmed state.

The system may be a security system or another system that is configured or configurable to provide security functionality. The arming state may be an arming state of the system. The arming state may be associated with the security function of the system.

The controller may be comprised in a local hub, which may be located at the same premises as the monitoring device. The controller may be comprised in a remote system, e.g. a remote server, which may be located remotely from the premises.

At least part of the system, e.g. at least the monitoring device, may be configured to perform a function that comprises detecting motion in the environment in the armed state and may be configured to perform a different function that comprises detecting motion in the environment in the unarmed state. The function that comprises detecting motion in the environment in the armed state may comprise a security function, e.g. threat detection (e.g. intruder detection), and/or an intervention action (e.g. an intruder deterrent action). The different function that comprises detecting motion in the environment in the unarmed state may comprise detecting a state and/or activity of an occupant. When in the unarmed state, such an occupant would likely be a legitimate occupant. The different function that comprises detecting motion in the environment in the unarmed state may comprise detecting peril, e.g. when the occupant is in or at risk of being in a perilous situation.

The controller may be configured to: when the system is in the armed state, signal the monitoring device to operate at least one module that is not the motion detector to capture data from a verification field of view spanning the field of view of the motion detector to enable threat verification, said operating of the at least one component being directly responsive to the detection of motion by the motion detector. In some embodiments, the controller may be configured to: when the system is in the unarmed state, signal the monitoring device to perform at least one action responsive to detection of both motion by the motion detector when the monitoring device is associated with the unarmed state and the detection of a subsequent audio trigger.

The monitoring device may comprise a sound transducer for converting a sound wave to an electrical signal and a sound processing module for processing an output of the sound transducer. The monitoring device may be configured such that, when in the unarmed state, the monitoring device may be configured to perform a process in which: the sound processing module is triggered to operate in response to motion detected by the motion detector. The process performed in the unarmed state may comprise the at least one action being performed in response to the sound processing module determining that the audio trigger is represented in the output of the sound transducer. The audio trigger may be an audio trigger indicative of peril.

The security system may comprise at least one image capturing unit, wherein the at least one action comprises triggering the at least one image capturing unit to capture at least one image. The at least one image capturing unit may be comprised in the monitoring device, e.g. in a unitary device. The system may comprise at least one other image capturing unit. The monitoring device may be configured to transmit a notification responsive to determining that the audio trigger is present in the output of the sound transducer, the notification being a notification for the at least one other image capturing unit to collect at least one image.

According to a third example of the present disclosure, there is provided a method of operation of a monitoring device, the monitoring device comprising a motion detector for detecting a motion in an environment, the monitoring device being associated with an arming state, the arming state being associated with a security function, wherein the method comprises operating the monitoring device such that: in both an armed state and an unarmed state, the motion detector is configured to detect motion in the environment, wherein detection of motion is dependent on one or more parameters of the device. The method may comprise operating the monitoring device to configure the at least one of the parameters of the monitoring device to provide a higher acuity in detecting motion in an unarmed state than in an armed state.

The method may comprise selecting, setting or adjusting a value of at least one of the one or more parameters of the monitoring device depending on the arming state associated with at least the motion detector, so that the at least one of the parameters of the monitoring device provide a higher acuity in detecting motion in an unarmed state than in an armed state. For example, the method may comprise determining the arming state or a change in arming state associated with at least the motion detector, determining the value of the at least one of the parameters of the monitoring device associated with the arming state and setting the at least one of the parameters of the monitoring device to the determined value.
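A minimal sketch of this method follows; the per-state values and the setter names on the monitoring device are assumptions used only for illustration:

# Hypothetical per-state parameter values (see the earlier profile sketch).
PROFILES = {
    "armed":   dict(gain=1.0, threshold=0.8, band_hz=(0.4, 2.0), crossings=3),
    "unarmed": dict(gain=2.0, threshold=0.4, band_hz=(0.1, 6.0), crossings=1),
}

def on_arming_state_change(monitoring_device, new_state: str):
    """Determine the values for the new arming state and apply them."""
    p = PROFILES[new_state]
    monitoring_device.set_gain(p["gain"])
    monitoring_device.set_threshold(p["threshold"])
    monitoring_device.set_frequency_band(*p["band_hz"])
    monitoring_device.set_crossings_required(p["crossings"])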

The monitoring device may be part of a system, such as a security system or another system that implements security functionality, for example. The arming state may be an arming state of the system. The arming state may be associated with the security function of the system. The system may be a system according to the second example.

The monitoring device may be a monitoring device of the first example.

According to a further example of the present disclosure, there is provided a computer program embodied on a non-transient, tangible computer readable carrier medium, the computer program being configured such that, when implemented by a processor of a monitoring device that comprises a motion detector for detecting a motion in an environment, the monitoring device being associated with an arming state, the arming state being associated with a security function, the computer program causes the processor to cause the monitoring device to operate such that: the motion detector detects motion in the environment in both an armed state and an unarmed state, wherein detection of motion is dependent on one or more parameters of the device; and at least one of the parameters is configured to provide a higher acuity in detecting motion in the unarmed state than in the armed state.

The monitoring device may be part of a system, such as a security system or another system that implements security functionality, for example. The arming state may be an arming state of the system. The arming state may be associated with the security function of the system.

The computer program may be configured to implement the method of the third example.

The individual features and/or combinations of features defined above in accordance with any aspect of the present disclosure or below in relation to any specific embodiment of the disclosure may be utilised, either separately and individually, alone or in combination with any other defined feature, in any other aspect or embodiment of the disclosure.

Furthermore, the present disclosure is intended to cover apparatus configured to perform any feature described herein in relation to a method and/or a method of using or producing, using or manufacturing any apparatus feature described herein.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

For a better understanding of the present disclosure and to show how embodiments may be put into effect, reference is made to the accompanying drawings in which:

Figure 1 shows a prior art monitoring system at a premises;

Figure 2 shows an example of a monitoring system at a premises, the monitoring system being in an unarmed state;

Figure 3 is a schematic of an exemplary arrangement of a monitoring device that is usable with a monitoring system, such as that of Figure 2;

Figure 4 shows an example of the monitoring system of Figure 2 in an armed state;

Figure 5 shows an example of the monitoring system of Figure 2 in an unarmed state;

Figure 6 is a schematic of an example of an image capturing unit that could be used in the monitoring devices of Figure 3;

Figure 7 is a schematic of a motion detector that could be used in the monitoring devices of Figure 3;

Figure 8 is an illustration of a motion detector that could be used in the monitoring devices of Figure 3;

Figure 9 is an example of an output signal of the motion detector of Figure 8;

Figure 10 is a flowchart of a method of operation of a monitoring device, such as those of Figure 3; and

Figure 11 is a flowchart of a method of operation of the monitoring device when in the unarmed state.

DETAILED DESCRIPTION

In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments in which the inventive subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice them, and it is to be understood that other embodiments may be utilized, and that structural, logical, and electrical changes may be made without departing from the scope of the inventive subject matter. Such embodiments of the inventive subject matter may be referred to, individually and/or collectively, herein by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed.

The following description is, therefore, not to be taken in a limited sense, and the scope of the inventive subject matter is defined by the appended claims and their equivalents.

In the following embodiments, like components are labelled with like reference numerals.

In the following embodiments, the term data store or memory is intended to encompass any computer readable storage medium and/or device (or collection of data storage mediums and/or devices). Examples of data stores include, but are not limited to, optical disks (e.g., CD-ROM, DVD-ROM, etc.), magnetic disks (e.g., hard disks, floppy disks, etc.), memory circuits (e.g., EEPROM, solid state drives, random-access memory (RAM), etc.), and/or the like.

As used herein, except wherein the context requires otherwise, the terms “comprises”, “includes”, “has” and grammatical variants of these terms, are not intended to be exhaustive. They are intended to allow for the possibility of further additives, components, integers or steps.

The functions or algorithms described herein are implemented in hardware, software or a combination of software and hardware in one or more embodiments. The software comprises computer executable instructions stored on computer readable carrier media such as memory or other type of storage devices. Further, described functions may correspond to modules, which may be software, hardware, firmware, or any combination thereof. Multiple functions are performed in one or more modules as desired, and the embodiments described are merely examples. The software may be executed on a digital signal processor, ASIC, FPGA, microprocessor, microcontroller or other type of processing device or combination thereof.

Furthermore, references are made herein to a processing module. The processing module may comprise software and/or be at least partly implemented using software. The processing module may comprise, and/or be at least partially implemented using, hardware for processing, which may comprise one or more devices or a system of devices; such a system may comprise a plurality of processing devices that may be distributed or localized, may comprise different processing devices in different apparatus, or may all be contained in a single apparatus. The processing devices may comprise one or more processors, which may be multi-core or single core processors, ASICs, one or more hardware programmable devices (e.g. FPGAs), electrical, electronic or other logic circuits, and/or the like, or any combination thereof.

Specific embodiments will now be described with reference to the drawings.

Figure 1 is a schematic showing an example of a prior art monitoring system 5 arranged in, and configured to monitor, a premises 10. The premises 10 may be or comprise any suitable location such as a house, apartment, office, factory or other building or part thereof, an area of land such as a garden, yard, goods area, and/or the like. However, in the examples herein the premises 10 comprises a building or a part thereof. The premises 10 are generally localized and coverable by a local area network and/or short range communications such as Wi-Fi and/or the like. The monitoring system 5 comprises several separate components. In this example, the monitoring system 5 comprises a number of components including a monitoring device 15, an image collection device in the form of a camera 20, a motion detector 25 and a combined motion detector and camera 30, but the components may comprise different or further features and are not limited to these examples. Different components, different numbers of components and/or different arrangements of components are possible. Each of the above components of the monitoring system is configured to wirelessly communicate with a local control hub 35.

Of course, the monitoring system 5 of Figure 1 is exemplary and a variety of other configurations of monitoring system are known in the art. For example, although the components of the monitoring system in the example of Figure 1 communicate with each other via the hub 35, e.g. in a star topology, any other arrangement in which components communicate with each other could be used, e.g. in a mesh topology, or via a remote server 37 such as by using a direct IP connection (e.g. a cellular connection) or an indirect IP connection (e.g. via any generic WiFi hub). Also, different numbers and types of components could be used. Any processing required to operate the monitoring system 5 may be carried out by the control hub 35, be distributed over the control hub 35 and one or more of the components 15 to 30, or provided on one or more or each of the components 15-30.

In this particular example, the monitoring device 15 comprises a motion detector for detecting motion of an entity. The motion detector advantageously may comprise a passive infrared (PIR) motion detector of the type well known in the art. In examples, the monitoring device 15 also comprises a camera or other image capturing device.

The monitoring system 5 is configurable using a user-selectable arming state to be in an armed state, an unarmed state or a partially armed state whereby only a portion of the premises 10 is armed (typically excluding any bedrooms). In the description herein, when referring to the monitoring system 5 as being armed or disarmed it is in reference to at least a part of the premises (e.g. a room of a premises) that includes the monitoring device 15, thus not precluding that when the system is described as armed it may mean only partially armed, so long as the part of the premises having the monitoring device 15 is armed. Similarly, where describing the system as being unarmed it is in respect of at least a part of the system that comprises the monitoring device 15 and does not preclude the possibility that another part of the premises may be armed.

When the monitoring system 5 is in the armed state, the expectation is that there will be no entity of interest, typically a person, present at the premises 10 (or at least in the part of the premises associated with an armed state of the monitoring system 5). When the monitoring system 5 is in the armed state, motion that is indicative of an entity of interest is determined responsive to motion that meets one or more criteria being detected by the motion detector of the monitoring device 15. Responsive to such detection of motion, a verification action could be taken in order to verify the presence of an entity of interest that led to the detection of motion. For example, the image collection device of the monitoring device can be triggered to collect an image as a verification action. Optionally, in response to the detection of motion, there may additionally be a triggering of one or more of the camera 20 and the camera of the combined motion detector and camera 30, which may be triggered by a message received via the control hub 35, but alternatively via the remote server 37, or in other embodiments may be triggered via a message received directly from the monitoring device 15 or via a mesh network. Additionally or alternatively, an alarm may be raised in response to the detected motion.

In the example of Figure 1, the monitoring system 5 is configured as a security system that monitors for intruders whilst the system is armed, usually whilst the occupants or residents are away from the premises 10.

Figure 2 shows a system 100 that is configured to provide functionality when a user 110 (i.e. an occupant or resident) is at the premises 10. In examples, the system 100 is configured to monitor for indications of peril at the premises 10, particularly indications provided by the user 110, but the functionality provided when the user 110 is at the premises 10 is not limited to this and different or additional functionality can be provided. In examples, the system 100 is additionally or alternatively operable to provide security functionality, which may be as part of a security system or another system that provides security functionality among other functions, in addition to the functionality provided when the user 110 is at the premises 10. The security functionality could comprise, for example, threat detection (e.g. intruder detection), and/or an intervention action (e.g. an intruder deterrent action). For example, the system 100 is operable to provide the security functionality when in the armed state. The functionality provided when the user 110 is at the premises 10, e.g. monitoring for indications of peril, may be provided when the system 100 is in an unarmed state, and in some embodiments not when the system 100 is in an armed state.

The system 100 of Figure 2 comprises a control hub 102 and a monitoring device 115. The system 100 also comprises a camera 20, motion detector 25 and combined motion detector and camera 30, similar to, or the same as, those in the system of Figure 1. At least the monitoring device 115, optionally together with the control hub 102 or the remote server 37, is adapted to provide functionality leading to the capture of an image, such as at least one of: motion detection, triggering of components and/or audio processing to determine audio triggers. For example, whilst the monitoring device 115 is preferably self-contained, performing all processing required to trigger image capturing in response to detecting motion, as an optional alternative the control hub 102 or the remote server 37 can contribute to some of the functions, e.g. by providing a library of reference audio fingerprints for use in sound recognition, or by relaying notifications to other devices to take actions and/or the like.

Figure 3 shows a schematic of an exemplary monitoring device 115 that could be used in the system 100 of Figure 2. The monitoring device 115 comprises a motion detector 120, preferably a passive motion detector such as a passive infra-red (PIR) motion detector. PIR motion detectors are known in the art. The motion detector 120 comprises imaging optics 125, which may comprise an array of lenses (optionally Fresnel lenses), for directing infra-red radiation incident on the imaging optics 125 to an infra-red (IR) sensor 130, comprised of a pyroelectric sensor module having at least one pair of pyroelectric elements, for transducing infra-red radiation into electrical signals. The motion detector 120 is described in further detail in relation to Figures 7, 8 and 9. The electrical signals from the IR sensor 130 are processed by a motion detection module 135 that applies one or more criteria, using electronics and/or software, to determine whether the electrical signals from the IR sensor 130 are indicative of motion. The motion detection module 135 could be implemented at least in part using one or more electronic circuits and/or one or more processors, or the like. The motion detection module 135 may be configured to generate an output indicating that motion has been detected if the output from the infra-red sensor 130 meets the criteria for motion detection.

The monitoring device 115 in this example also comprises a microphone that includes a sound transducer 140 configured to generate an electric signal indicative of received sound waves. The electric signal from the sound transducer 140 is provided to a sound processing module 145 that is configured to perform processing of the sound represented in the signal. However, the microphone and sound processing module are not essential and, in other examples, the monitoring device 115 does not comprise a microphone or sound processing module.

The monitoring device 115 is arranged such that the sound processing module 145 is operative responsive to motion being detected by the motion detection module 135. In some embodiments the sound transducer 140 is always operative. In other embodiments, the sound transducer 140 is only operative when the sound processing module 145 is operative.

In the particular example shown in Figure 3, the sound processing module 145 comprises a thresholder 150 that is enabled by the signal output from the motion detection module 135 that is indicative of motion being detected (i.e. indicative of the signal received from the infra-red sensor 130 meeting the criteria for motion detection). In this example, the thresholder 150 is in the form of a comparator having, as comparison inputs, a threshold referencing input 155 and the sound measurement input that is or represents the output of the sound transducer 140. The comparator also has the output of the motion detection module 135 as an enable input. For simplicity of illustration, the thresholder 150 is depicted in the figures as being provided by analog electronics, though it may alternatively be implemented digitally or in software. Where implemented digitally or in software the sound processing module 145 includes an analog to digital converter (not shown) between the output of the sound transducer 140 and sound measurement input of the comparator. With this in mind, it will be appreciated that any or all parts of the microphone other than the sound transducer 140 may be provided by the sound processing module 145.

The output 160 of the thresholder 150 is indicative of a strength parameter associated with the output of the sound transducer 140 being greater than the threshold referencing input 155. For the strength parameter, the thresholder 150 may use, for example, an amplitude of the output of the sound transducer, for example to represent a signal power or a sound intensity, or a measurement based on an integration of the output, for example to represent an energy associated with an incident sound wave within a predefined time window, or another parameter of the output of the sound transducer 140.
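Purely by way of non-limiting illustration, the thresholder 150 described above could, where implemented in software, take a form along the lines of the following Python sketch. The function name and the use of a simple peak amplitude as the strength parameter are assumptions made only for illustration and are not prescribed by this disclosure.

def thresholder_output(enabled, samples, threshold):
    # Enable input: driven by the output of the motion detection module 135.
    if not enabled:
        return False
    # Amplitude-based strength parameter; a signal power or an energy
    # integrated over a predefined time window could equally be used.
    strength = max((abs(s) for s in samples), default=0)
    return strength > threshold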

The output 160 of the thresholder 150 may be provided to a sound recognition unit 162. In this way, operation of the sound recognition unit 162 to process the output of the sound transducer 140 may be triggered responsive to the output of the thresholder 150 being indicative of (1) the output of the motion detection module 135 indicating that motion has been detected and (2) the sound measurement input received from the sound transducer 140 being greater than the threshold referencing input 155.

However, in other embodiments, the thresholder 150 is omitted, whereby the sound recognition unit 162 is triggered responsive to detection of motion by the motion detector 120 without requiring an output of the sound transducer 140 to pass a threshold.

The sound recognition unit 162 can generate audio signatures or fingerprints extracted from the output of the sound transducer 140 and compare these to reference audio signatures or fingerprints in a datastore 180 that are indicative of audio triggers to determine if the audio triggers are represented in the output of the sound transducer 140. In this example, the monitoring device 115 comprises an image capturing unit 165. Responsive to determining that one or more of the audio triggers are represented in the output of the sound transducer 140, the sound processing module 145, or an intermediating processing module 172 in communication with the sound processing module 145, can trigger the image capturing unit 165 to collect an image. The intermediating module 172 may provide the sound processing module 145 with access to the datastore 180, e.g. as depicted in Figure 3, or the sound processing module 145 may have direct access to the datastore 180. Though not shown, it will be appreciated that such an intermediating processing module 172 may additionally or alternatively interface between the motion detection module 135 and the sound processing module 145.
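Purely as a non-limiting sketch of the matching step performed by the sound recognition unit 162, the comparison of a candidate fingerprint against reference fingerprints from the datastore 180 could be structured as follows in Python. The fingerprinting and similarity functions here are toy placeholders introduced only to keep the sketch self-contained; any known audio-fingerprinting or keyword-spotting technique could be substituted.

import math

def extract_fingerprint(samples, bands=8):
    # Toy fingerprint: RMS energy in equal-length segments of the audio frame.
    # A real implementation would use a proper audio-fingerprinting or
    # keyword-spotting technique; this placeholder only keeps the sketch
    # self-contained.
    step = max(1, len(samples) // bands)
    return [math.sqrt(sum(x * x for x in samples[i:i + step]) / step)
            for i in range(0, step * bands, step)]

def similarity(a, b):
    # Cosine similarity between two fingerprints.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def audio_trigger_detected(frame, reference_fingerprints, match_threshold=0.9):
    # The reference fingerprints would be loaded from the datastore 180.
    candidate = extract_fingerprint(frame)
    return any(similarity(candidate, reference) >= match_threshold
               for reference in reference_fingerprints)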

The image capturing unit 165 comprises optics 173, a pixel array 174 and other camera electronics 176. The optics 173 comprises one or more lenses and/or other optical components required to collect optical radiation and provide it to the pixel array 174. The pixel array 174 comprises an array of optical transducers, and may comprise, for example, a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) pixel array. The other camera electronics 176 are configured to condition and process the output of the pixel array. The components of the image capturing unit 165 are described in further detail in relation to Figure 6.

Any or all of: the motion detection module 135, the sound processing module 145 (or any of its constituent components such as the sound recognition unit 162 and/or the thresholder 150), the intermediating processing module 172 and/or the other camera electronics 176 may be provided as discrete components. However, Figure 3 shows an option in which a processing module 175 comprises at least one processing device such that the processing module 175 may be configured to provide any or all of the functionality described above in relation to one or more or each of: the motion detection module 135, the sound processing module 145 (or any of its constituent components such as the sound recognition unit 162 and/or the thresholder 150), the intermediating processing module 172 and/or the other camera electronics 176. For example, the processing module 175 could comprise one or more processors programmed with suitable software that causes the processors to provide the functionality described above in relation to one or more or each of: the motion detection module 135, the sound processing module 145, the thresholder 150 and/or the intermediating processing module 172. That is, one or more or each of the motion detection module 135, the sound recognition unit 162, the thresholder 150 and/or the intermediating processing module 172 could be implemented as software modules on the one or more processors of the processing module 175. The intermediating processing module 172 and/or the processing module 175 can be implemented using software and/or hardware and could optionally comprise or be implemented on one or more processors, ASICs, FPGAs or the like, or by a logic circuit, an electronics circuit, an electrical circuit, or the like. That is, a skilled engineer could envisage several ways of implementing the above functionality, including distributed and/or local processing, based on the present teaching.

Upon being triggered by the detection of motion by the motion detector 120 (and optionally also responsive to the sound measurement input received from the sound transducer 140 exceeding the threshold referencing input 155) the sound recognition unit 162 is configured to process the output of the sound transducer 140, at least to determine if there is an audio trigger represented in the output of the sound transducer 140. This triggering of the sound processing module 145 may involve switching the sound recognition unit 162 into a higher power mode from a lower power or off mode, e.g. in which it may have been residing in order to conserve power, but this need not necessarily be the case.

The audio trigger preferably comprises or is indicative of an indication of peril such that the system 100 is operable to perform an action responsive to detecting an indication of peril. Beneficially, the sound processing module is configured to determine if any of a plurality of different audio triggers (e.g. different indications of peril) are represented in the output of the sound transducer 140. At least one of the audio triggers may be a vocal audio trigger, which may comprise a word, phrase or other verbal audio trigger such as “help” or “emergency” or “I’m in danger”, or may comprise a non-verbal vocal audio trigger such as a scream, cry or shriek. At least one of the audio triggers may comprise a non-vocal audio trigger such as the sound of a crash, breaking glass, an explosion, a bang and/or the like. Since the sound processing module can detect any of a range of audio triggers, it may make it easier for users 110 to get the system 100 to react, particularly during perilous situations where the user 110 may be under a high degree of stress and may not quickly remember a single specific audio trigger. Similarly, by being responsive to vocal but non-verbal and/or non-vocal audio triggers, it may be easier to get the system 100 to respond to indications of peril.

Responsive to the sound processing module 145 determining that any of the audio triggers are represented in the output of the sound transducer 140, the image capturing unit 165 (such as a camera) of the monitoring device 115 is triggered to capture at least one image, such as a still image or video. In this way, the monitoring device 115 may capture an image of whatever it is that is causing the perilous situation.

The monitoring device 115 also comprises a communications unit 170 for communicating with other components of the monitoring system 100, such as at least one of: the control hub 102 or remote server 37, the camera 20, the combined motion detector and camera 30 and/or the like. Responsive to determining that any of the audio triggers are represented in the output of the sound transducer 140 (where such determining is only performed after suitable motion has been detected), the sound processing module 145 sends a communication to at least one other component of the system 100 using the communications unit 170. For example, the communication may be a request for any camera, such as the camera 20 or the camera in the combined motion detector and camera 30, to also collect one or more images. In this way, even if the cause of the perilous situation is in a different room, or simply not visible to the image capturing unit 165, then images of the cause of the perilous situation may still be taken. Yet by, in any case, using the device 115 to capture one or more images, the system 100 may provide information about a scene in which the person who provided the vocal indication of peril is likely to be situated, and therefore also about the condition of that person.

This can be described with reference to Figure 2. For example, the user 110 (resident) may see a first threat 171 and so jumps up startled, lets out a scream and calls out “help, intruder!”. The motion detector 120 of the monitoring device 115 detects the motion associated with the user 110 jumping up and sends an enabling signal to the thresholder 150 so that the thresholder 150 (and optionally the sound transducer 140 if the sound transducer 140 is not always enabled) are enabled into an operational condition. A strength parameter, in the form of the sound strength collected by the sound transducer 140, including the sound strength of the sound due to the scream and call of the user 110, is above the threshold referencing input 155. As such, the thresholder 150 returns the enable signal 160 that triggers operation of the sound recognition unit 162 to process the sound received by the sound transducer 140, including the scream and shout of the user 110. Responsive to the enable signal 160 (which is provided responsive to both the motion being detected and the sound being above a threshold level), the sound recognition unit 162 processes the signal from the sound transducer 140 and determines that the scream of the user 110 and the words “help” and “intruder” are represented in the signal from the sound transducer 140. The sound processing module 145 (or alternatively the processing module 172 or 175) is configured to determine that these are all pre-set audio triggers indicative of peril, e.g. by matching audio signatures or fingerprints to reference audio signatures or fingerprints stored by the data store 180. Responsive to determining that one or more of the pre-set audio triggers are represented in the output from the sound transducer 140, the image capturing unit 165 of the monitoring device 115 is triggered to capture one or more images. The first threat 171 originally seen by the user 110 is in the same room as the user 110 and within the field of view of the image capturing unit 165. As such the first threat 171 is captured in the images collected by the image capturing unit 165.

However, in the scenario of Figure 2, there are additional or alternative threats 173, 174 that are outside of the field of view of the image capturing unit 165. Responsive to at least the determination that both motion has been detected and at least one audio trigger indicative of peril has been received, the monitoring device 115 automatically transmits or broadcasts a notification that a perilous situation has been determined. The notification may trigger, either directly or indirectly, any receiving devices with a camera to collect at least one image. In the example of Figure 2, the camera 20 and combined motion detector and camera 30 both operate to collect images due to the notification. In this case, the images collected by the camera 20 capture one of the additional or alternative threats 173 and the images collected by the camera of the combined motion detector and camera 30 capture the other additional threat 174. As such, even in the case where the user 110 (resident) is in one room and the threat 174 is not in the room and not visible to the user 110, the threat can still be captured by the system 100.

Variations on the above arrangements are possible. For example, Figure 4 shows the system 100 of Figure 2 configured as a security system that is switchable between an armed state and an unarmed state, with Figure 4 illustrating the operation of the system 100 in the armed state and Figure 2 illustrating the operation of the system 100 in the unarmed state.

In the armed state, as illustrated in Figure 4, at least part of the premises 10 is unoccupied by the user 110 (e.g. the resident of the premises) and the monitoring device 115 monitors for motion at least at that part of the premises 10. When an intruder 505 enters that part of the premises 10, the motion of the intruder 505 is detected by the motion detector 120 of the monitoring device 115 and, when the system 100 is in the armed state, the monitoring device 115 is configured to automatically collect at least one image using the image capturing unit 165 directly responsive to the detection of motion meeting the criteria by the motion detector 120 (i.e. without also requiring the presence of an audio trigger). Further, the monitoring device is also configured to, directly responsive to the detection of motion meeting the criteria by the motion detector 120, send a notification that causes at least one component of the system 100 that is not the motion detector 120 (or the sound transducer 140) to capture data from a verification field of view spanning the field of view of the motion detector 120 to enable threat verification. Because the system 100 is armed, the at least one component may be directly responsive (even if the at least one component receives the command to capture images from an intermediate device such as the control hub 102 or server 37 in response to the control hub 102 or server 37 receiving the notification) to the detection of motion by the motion detector 120 of the monitoring device 115. In other words, motion detection alone is sufficient to trigger the at least one component to capture an image or images, when the system 100 is armed. In the armed state, the system 100 may additionally or alternatively be configured to raise an alarm, which is optionally raised either directly on detection of motion without any audio trigger being detected or at least once the presence of the intruder has been confirmed by the at least one component of the system 100 that is not the motion detector 120 that is operated responsive to detection of motion.

In contrast, when the system 100 is in the unarmed state, the monitoring device 115 is configured to perform the process described above in relation to Figures 2 and 3, i.e. to trigger the image capturing unit 165 and optionally send the notification responsive to determining that an audio trigger is present in the output of the sound transducer 140, which is performed by the sound processing module 145 (and specifically by the sound recognition unit 162 thereof) responsive to detection of motion by the motion detector 120. In the unarmed state, whilst an alarm or alert may optionally ultimately be raised as an additional action, this is responsive to a multi-stage process that comprises at least detection of an audio trigger by the sound recognition unit 162 of the sound processing module 145 that is triggered responsive to a detection of motion meeting the criteria by the motion detector 120.

In this way, the system 100 can provide security functionality, e.g. to act as a security system, and quickly perform a security action, such as one or more of: verifying presence of an intruder (i.e. threat verification), taking a deterrent action and/or raising an alarm, when the system 100 is in an armed state set by the user 110 when the user is not intending to be present in at least the part of the premises 10. However, when the system 100 is in the unarmed state, e.g. which may be selected if the user 110 is present in at least the part of the premises 10, the system 100 can still detect and action an indication of peril or perform other functionality such as monitoring whilst the user 110 is present.
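The contrast between the single-stage triggering in the armed state and the cascading triggering in the unarmed state could, purely by way of non-limiting illustration, be captured by the following Python sketch. The stub class and all of its method names are assumptions introduced only for illustration; they stand for the operations described above.

class MonitoringDeviceStub:
    # Minimal stand-in for the monitoring device 115; every action is a stub.
    def __init__(self, armed):
        self.armed = armed

    def capture_image(self):
        print("capture image using the image capturing unit 165")

    def send_verification_request(self):
        print("notify other components to capture verification data")

    def broadcast_notification(self):
        print("broadcast notification of a perilous situation")

    def run_sound_recognition(self):
        # Placeholder for the sound processing module 145 / sound recognition
        # unit 162 determining whether an audio trigger is represented in the
        # sound transducer output.
        return True


def on_motion_detected(device):
    if device.armed:
        # Armed state: detection of motion alone is sufficient to trigger
        # image capture and threat verification (single-stage trigger).
        # An alarm may optionally also be raised now or after verification.
        device.capture_image()
        device.send_verification_request()
    else:
        # Unarmed state: cascading trigger - motion only enables audio
        # processing, and image capture follows only if an audio trigger
        # (e.g. an indication of peril) is detected.
        if device.run_sound_recognition():
            device.capture_image()
            device.broadcast_notification()

For example, on_motion_detected(MonitoringDeviceStub(armed=False)) exercises the unarmed, multi-stage path.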

However, the concepts described herein are not limited to use in security systems. For example, the audio trigger may be an audio trigger other than an indication of peril. Further, the system 100 may be configured to perform some other action responsive to the audio trigger, such as monitoring and recording data relating to the user 110 (e.g. to determine their behaviour or condition), or the like.

Additional or alternative actions may be performed by the monitoring device 115, or by another component of the system 100, responsive to the sound processing module 145 determining that the audio trigger is present after having been triggered by motion being detected by the motion detector 120. The additional or alternative action could comprise, for example, one or more of: raising an alarm or alert, notifying a prescribed contact and/or emergency service, causing a deterrent device to take a deterrent action, causing another verification device to take a verification action, logging the motion and audio trigger to a data store, collecting data on the activities of the user, and/or the like. The verification action taken responsive to the sound processing module 145 determining that the audio trigger is present after having been triggered by motion being detected by the motion detector 120 may be a peril verification action verifying that the indication of peril is a real peril, rather than simply the motion-based indication of peril. A “verification action” as used herein may comprise, for example, capturing one or more images, or capturing radar, sonar or LIDAR data which may be three-dimensional data, etc. A “deterrent action” may, for example, comprise emitting matter that may deter an intruder, e.g. smoke or fog and/or a chemical irritant, etc.

The additional or alternative action could be taken by the monitoring device 115 itself and/or the monitoring device 115 could be configured to automatically communicate a notification signal indicating that both motion and an audio trigger have been detected via the communications unit to another component of the system 100 to cause the alternative or additional action to be performed.

Figure 5 shows an example of a system 200 that is substantially similar to that described in relation to Figures 2 and 4, with corresponding components having reference numerals incremented by 100, except that a monitoring device 215 of the system 200 is configured to take an action other than operating an image capturing unit of the monitoring device 215 to collect an image.

Similarly to the monitoring device 115 described in relation to Figures 2 and 3, the monitoring device 215 of Figure 5 comprises a motion detector 120 for detecting motion. The monitoring device 215 also comprises a sound transducer 140 operable to capture sound and a sound processing module 145 that is operable responsive to the motion detector 120 determining that motion has been detected. Similarly to the monitoring device 115 described in relation to Figures 2 and 3, the sound processing module 145 of the monitoring device 215 of Figure 5 comprises a sound recognition unit 162 configured to identify indications of peril in the output of the sound transducer 140. However, responsive to indications of peril being identified, rather than controlling an image capturing unit of the monitoring device to collect an image, a different action is taken. In examples, the different action can comprise switching the arming state of the system 200 from an unarmed state into an armed state, and/or communicating with other devices 20, 25, 30 to cause them to take an action, and/or the like.

In the example of Figure 5, the system 200 is entirely unarmed and a user 110 (resident) is in a room of the premises 10 (which is a house in this example). A threat 510, such as an intruder, a bang or the like, in a different room of the premises 10 is heard by the user 110. In response to hearing the threat 510, the user 110 moves, e.g. as a result of being surprised to hear the threat 510. This movement of the user 110 is detected by the motion detector 120 of the monitoring device 215, which in turn activates the sound processing module 145 of the monitoring device 215 to start processing audio received from the sound transducer 140 of the monitoring device 215. Once the user 110 realises that the noise they heard is the threat 510, they cry out “help”. The cry of “help” is received by the sound transducer 140 of the monitoring device 215 and processed by the sound processing module 145 and specifically by the sound recognition unit 162 of the monitoring device 215. The sound recognition unit 162 of the monitoring device 215 determines that the cry of “help” is indicative of peril by matching a sound signature or fingerprint representing the cry of “help” to a reference sound signature or fingerprint indicative of peril. Responsive to determining that there is an indication of peril, the monitoring device 215 takes at least one action. For example, the at least one action taken could be to arm the system 200. In the armed state, the system 200 may be configured to raise an alarm directly on detection of any further motion. Additionally or alternatively, the at least one action taken may comprise triggering any camera devices comprised within the system 200, such as at least one of: a camera in the monitoring device 215, the camera 20 and/or the camera in the combined camera and motion detector 30, to collect an image. Additionally or alternatively, the at least one action taken may comprise raising an alarm.

In any of the examples described herein, the triggering of the image capturing unit 165 could involve providing an enabling signal or command signal to collect an image independently of any power control, or it could involve switching the image capturing unit 165 from an off, lower power, sleep or partly functional mode into a higher power or fully functional mode. Beneficially, the monitoring device 115 shown in Figure 3 can be comprised in a single device, e.g. with the components held together by and/or inside a housing. The monitoring devices 115 are optionally battery powered, such that the operations described herein, which are designed to conserve power, may be particularly beneficial.

In some examples, responsive to motion detection by the motion detector 120, one or both of the sound transducer 140 and/or the thresholder 150 may be switched into an operational state (optionally from an off, lower power or non-operational state) and may optionally remain in the operational state for, or for up to, a predetermined amount of time since a most recently detected motion before returning to a lower powered or otherwise non-operational state. The predetermined amount of time may, for example, be a time in a range from 10 seconds to 30 minutes; preferably at least 2 minutes, and/or preferably not more than 10 minutes, e.g. 5 minutes. During the predetermined amount of time, if a sound strength is detected that exceeds the threshold referencing input 155, then the sound recognition unit 162 is enabled to analyze sound data derived from the sound transducer 140 to look for any of the audio triggers, e.g. an indication of peril.

In response to a determination that a sound strength exceeds the threshold referencing input, the sound recognition unit 162 may be made operational for, or for up to, a predetermined amount of time. This predetermined time relating to the operation of the sound recognition unit may be, for example, a time that is less than 60 seconds and/or more than 10 seconds, e.g. 30 seconds.
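Purely as a non-limiting illustration, these timed operational windows could be tracked as in the following Python sketch. The window lengths reflect the example values given above (5 minutes and 30 seconds); the class and method names are assumptions made only for illustration.

import time

MOTION_WINDOW_S = 5 * 60        # thresholder/sound transducer stay operational
RECOGNITION_WINDOW_S = 30       # sound recognition unit stays operational

class OperationalWindows:
    def __init__(self):
        self.last_motion_time = None
        self.last_loud_sound_time = None

    def on_motion(self):
        self.last_motion_time = time.monotonic()

    def on_loud_sound(self):
        # Called when the sound strength exceeds the threshold referencing input 155.
        self.last_loud_sound_time = time.monotonic()

    def thresholder_operational(self):
        return (self.last_motion_time is not None and
                time.monotonic() - self.last_motion_time < MOTION_WINDOW_S)

    def recognition_operational(self):
        return (self.last_loud_sound_time is not None and
                time.monotonic() - self.last_loud_sound_time < RECOGNITION_WINDOW_S)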

Figure 6 is a schematic of an example of an image capturing unit 165 that could be used in the system of Figure 3. In this case, the image capturing unit 165 comprises optics 605 (equivalent to optics 173 in Figure 3) to collect radiation incident thereon and direct the radiation to a pixel array 610 (equivalent to pixel array 174 of Figure 3) that comprises an array of transducers for converting received radiation into electrical signals. The pixel array 610 may for example be a charge coupled device (CCD) or complementary metal oxide semiconductor (CMOS) array, as are known in the art. The output of the pixel array 610 is received by intermediate electronics 615 that are intermediate the pixel array 610 and an image signal processor, ISP, 620. The intermediate electronics 615 may be configured to condition the raw outputs of the pixel array 610 and may comprise, for example, analogue to digital converters (ADCs) for converting the electrical signals from the pixel array into digital signals. The ISP 620 is configured to process the digital images, e.g. to apply image correction to the raw image data. The image capturing unit 165 optionally comprises a further processor 625, which may act as a compressor for compressing and packaging the image corrected data output by the ISP 620. In different examples, one or more or each of the components 610, 615, 620, 625 can be provided as discrete components or one or more or each of the components could be combined together, e.g. on the same silicon substrate. In one example the ISP 620 is integrated with the intermediate electronics 615 and the pixel array 610 in a common image sensor. Furthermore, one or more or all of the intermediate electronics 615, the ISP 620 and/or the further processor 625 may be equivalent to the other camera electronics 176 in Figure 3.

Beneficially, the image capturing device 165 may be triggered in stages. When an audio trigger (e.g. indicative of an indication of peril) is detected by the sound processing module 145 (more particularly the sound recognition unit 162 in the sound processing module 145), a signal is provided by the sound processing module 145 (either indirectly, e.g. via the processing module 172, or directly) to the image capturing unit 165 that causes the pixel array 610, working with the ISP 620, to capture an image, which is thereafter processed by the further processor 625. In some examples, at least the pixel array 610 and the ISP 620 are powered or enabled by an enable signal EN3 in response to the detection of the audio trigger (e.g. indication of peril) and then straight away capture the one or more images. The enable signal EN3 may be provided by one of: the sound processing module 145 or the processing module 172 or 175, depending on which processing arrangement is used. However, in another embodiment, the pixel array 610 (and optionally the ISP 620) are powered upon the threshold referencing input 155 being exceeded (crossed) by the sound strength from the sound transducer 140, but only capture the one or more images if the audio trigger is detected by the sound processing module 145. Upon powering/enabling of the pixel array 610 and ISP 620, the image capturing unit 165 stabilizes and gets configured into a condition in which it is ready to capture images for processing by the further processor 625. During the stabilization process, the image capturing unit 165, for example, calibrates the exposure time to suit the lighting conditions. However, powering the pixel array 610 consumes significant power, and thus in some embodiments the powering/enabling of at least the pixel array 610 does not occur at least until the sound strength threshold referencing input 155 is passed, if not until when the audio trigger is detected. The ISP 620 may optionally also be powered at that time to enable the stabilization to occur at that time so that if/when image capturing is needed the image capturing unit 165 would have already stabilized. The further processor 625 could be powered and enabled at the same time as the ISP 620, or later if the ISP 620 and pixel array 610 are powered before there is a need to capture an image, but in either case the further processor 625 is powered/enabled by the time any images are captured after the stabilization.
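Purely by way of non-limiting illustration, the staged power-up described above, in the variant where the pixel array 610 and ISP 620 are enabled on the sound-strength threshold being crossed and an image is captured only once an audio trigger is detected, could be sketched in Python as follows. All class and method names are assumptions for illustration only.

class ImageCapturingUnitStub:
    def __init__(self):
        self.powered = False
        self.stabilized = False

    def enable(self):
        # Corresponds to the enable signal EN3 powering the pixel array 610
        # (and optionally the ISP 620).
        self.powered = True
        self.stabilize()

    def stabilize(self):
        # e.g. calibrate the exposure time to suit the lighting conditions.
        self.stabilized = True

    def capture(self):
        if self.powered and self.stabilized:
            return "image data"
        return None


def on_sound_strength_exceeded(unit):
    # Power and stabilize early so the unit is ready if capture is needed.
    unit.enable()

def on_audio_trigger(unit):
    # Capture only once an audio trigger has actually been detected.
    return unit.capture()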

Exemplary functional stages of the PIR motion detector 120 are shown in Figure 7. These are provided by way of example, and it will be appreciated that other arrangements of PIR detector could be used. In this example, the PIR motion detector 120 comprises an optical stage 705 (equivalent to imaging optics 125 in Figure 3) for providing an optical signal indicative of received IR radiation, a transducer stage 710 (equivalent to the infra-red sensor 130 in Figure 3) for transducing the optical signal provided by the optical stage 705 into an electrical signal, and a processing stage 715 (equivalent to the motion detection module 135 in Figure 3) for processing the electrical signal to determine whether or not the signal is indicative of motion (e.g. of human motion).

In examples, the optical stage 705 comprises a lens array formed from material that is transparent to IR radiation, wherein the lens array comprises an array of lenses, which could optionally be Fresnel lenses. Figure 8 shows a front view of the PIR motion detector 120 showing the array of lenses 805. The array of lenses 805 may be in the shape of a dome, or in the shape of a hemicylindrical sheet and/or a half-dome. The lenses 805 are typically arranged in rows 810, 815, 820 wherein a greater number of lenses may be provided in the rows at the top 810 than at the bottom 815, 820, and the lenses at the top 810 are configured to capture IR radiation from entities further away and the lenses at the bottom 815, 820 are configured for capturing IR radiation from entities closer to, and more beneath, the PIR motion detector 120. The arrangement of the array of lenses 805 may be considered as defining an optical transfer function that defines a transformation of moving IR radiation received at the optical stage 705 to the IR radiation signal provided to the transducer stage 710.

The transducer stage 710 in Figure 7 is configured to receive the IR radiation from the optical stage 705 (specifically from the array of lenses 805) and convert the IR radiation into at least one corresponding electrical signal. The transducer stage 710 could comprise, for example as shown in Figure 8, a pyroelectric sensor module 825 having at least one pair of pyroelectric sensor elements 830, 835. The pyroelectric sensor elements 830, 835 may be arranged in series but having opposing polarities (i.e. arranged back to back), where each of the elements has a different perspective with respect to the field of view of the motion detector 120. The pyroelectric sensor module 825 is configured to output the electrical signal indicative of IR radiation received from the optical stage 705.

As shown in Figure 7, the processing stage 715 receives the electrical signal from the transducer stage 710 and is configured to determine if the electrical signal is indicative of motion, e.g. motion of a human. The processing stage 715 comprises a gain stage 720, a comparison stage 725 and an event processing stage 730.

The gain stage 720 applies a gain to the electrical signal from the transducer stage 710. The gain stage 720 has a gain and frequency response profile that may be represented by a gain-stage transfer function. The gain may be modified uniformly across all frequencies, or non-uniformly, for example by changing the frequency response profile to have higher or lower gain at some frequencies compared with other frequencies. It will be appreciated that the gain stage 720 may be implemented by analog and/or digital electronics, and/or in software, using any technique known by the person skilled in the art.

Figure 9 shows an example of an output 905 of the gain stage 720 after the gain has been applied to the output of the pyroelectric sensor module 825. As is known in the art, the configuration of the optical stage 705 together with the transducer stage 710 is such that a person walking across the field of view causes the electrical signal to have peaks 910 and troughs 915 respectively corresponding to different angular bearings of the person within the field of view of the motion detector 120. For example, a peak 910 and a trough 915 may correspond to each lens of the array of lenses 805 that a person passes, the peak 910 corresponding to received infrared light seen by one of the pyroelectric elements 830 through that lens when the person is at a first angular bearing and the trough corresponding to received infrared radiation seen by an oppositely polarized pyroelectric element 835 through that lens when the person is at a second angular bearing, which is generally adjacent the first angular bearing. The electrical signal output by the pyroelectric sensor module 825 therefore has a frequency that is dependent on angular speed of the person with respect to the motion detector 120.
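As a rough, non-limiting illustration of this dependence on angular speed, the following Python sketch estimates the signal frequency under the simplifying assumption of one peak/trough pair per lens passed. The formula and the example numbers are illustrative assumptions and are not values taken from this disclosure.

def approximate_signal_frequency_hz(tangential_speed_m_s, distance_m, lenses_per_radian):
    # One peak/trough pair is assumed per lens passed, so the signal frequency
    # roughly equals the angular speed multiplied by the lens density across
    # the field of view.
    angular_speed_rad_s = tangential_speed_m_s / distance_m
    return angular_speed_rad_s * lenses_per_radian

# Example: a person walking at 1.2 m/s at a range of 4 m, with roughly 10
# lenses per radian, would be expected to produce a signal of about 3 Hz:
# approximate_signal_frequency_hz(1.2, 4.0, 10) -> 3.0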

The output 905 from the gain stage 720 is provided to the comparison stage 725 that compares the output 905 from the gain stage 720 to one or more thresholds 920, 925 (see Figure 9) in order to identify occurrences of the output 905 of the gain stage 720 passing the one or more thresholds 920, 925. The comparison may be performed by electronics or in software. The comparison may be with respect to a single threshold. In other embodiments, such as that shown in Figure 9, a first threshold 920 and a second threshold 925 may be used for the peaks 910 and troughs 915 of the output 905 of the gain stage 720 that are respectively associated with the peaks and troughs of the output of the transducer stage 710. Further, optionally different thresholds may be used for different frequency ranges.

The event processing stage 730 receives indications of the respective threshold-crossing occurrences identified by the comparison stage 725 and applies defined or predefined logic specifying one or more conditions to be met to give a determination of the presence of motion, e.g. motion of a human, or otherwise. If the occurrences of threshold-crossings identified by the comparison stage 725 meet the conditions, then a signal indicative of motion detection 735 is output from the event processing stage 730. If the occurrences of threshold-crossings identified by the comparison stage 725 do not meet the conditions, then it is determined that no motion, e.g. motion of a human, is present. An example condition is that there must be more than a predetermined number N of threshold crossings within a predefined time window. For example, the predetermined number N may be greater than 1 to aid against identifying noise as motion. Optionally the predetermined number N is not more than 3. The predefined time window (T) may optionally be correlated with the low frequency cut-off (fc) of the PIR's frequency response, for example such that N / fc < T for an embodiment in which only a single threshold is used for comparing against only peaks or only troughs, or, in another example, such that 2N / fc < T in embodiments in which two thresholds are used to compare peaks and troughs respectively.
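Purely as a non-limiting sketch of such event-processing logic, the "more than N crossings within a window T" condition could be evaluated in Python as follows. The input representation (a list of crossing timestamps) and the default values are assumptions for illustration only.

def motion_detected(crossing_times, n_required=2, window_s=4.0):
    # 'crossing_times' is assumed to be a list of timestamps (in seconds) at
    # which the comparison stage 725 reported a threshold crossing. Motion is
    # declared when more than n_required crossings fall within any span of
    # window_s seconds.
    times = sorted(crossing_times)
    start = 0
    for end in range(len(times)):
        while times[end] - times[start] > window_s:
            start += 1
        if end - start + 1 > n_required:
            return True
    return False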

As described above, the signal indicative of motion detection 735 output from the event processing stage 730 responsive to the occurrences of threshold-crossings identified by the comparison stage 725 meeting the conditions may be used to trigger a function which may optionally include at least triggering the sound processing module 145.

Figure 10 is a flowchart illustrating a method of operating the monitoring device 115 of Figures 2, 3 and 4 or the monitoring device 215 of Figure 5. The method may be implemented partly or wholly in software and/or may be implemented partly or wholly in hardware. For example, the method may be implemented in part or wholly by the processing module 175 or by one or more of: motion detection module 135, sound recognition unit 162, thresholder 150, processing module 172, image signal processor 620 and/or further processor 625.

At step 1005, the monitoring device 115 (or 215) is configured to detect motion in an environment at the premises 10. The detection of motion by the monitoring device 115, 215 is dependent on one or more parameters of the monitoring device 115, 215, wherein the one or more parameters are adjustable or selectable. The monitoring device 115, 215 is part of a system 100, 200 that provides security functionality and has an arming state associated with at least the monitoring device 115, 215, the arming state relating at least to the security functionality.

At step 1010, the arming state associated with at least the monitoring device 115, 215 is identified. This is generally stored in the data store 180 of the monitoring device 115, 215 and updated regularly or as appropriate by the control hub 102 and/or the remote server 37.

If it is identified in step 1010 that at least the part of the system 100, 200 that comprises the monitoring device 115, 215 is in the armed state then, in step 1015, at least one of the parameters of the monitoring device 115, 215 is set to a value associated with the armed state, the value providing a lower acuity in detecting motion than the value for the corresponding at least one parameter used in the unarmed state. In examples, when in the armed state, the system 100, 200 is operable to provide security functionality, e.g. as part of a security system, for detecting intruders at the premises 10, e.g. as discussed above in relation to Figures 1 and 4. When in the armed state, in step 1020, the monitoring device 115, 215 detects motion in the environment using the one or more parameters associated with the armed state, i.e. in which at least one of the parameters is set to a value that provides a lower acuity in detecting motion than the value for the corresponding at least one parameter used in the unarmed state.

If it is identified in step 1010 that at least the part of the system 100, 200 that comprises the monitoring device 115, 215 is in the unarmed state then, in step 1025, at least one of the parameters of the monitoring device 115, 215 is set to a value associated with the unarmed state, the value providing a higher acuity in detecting motion than the value for the corresponding at least one parameter used in the armed state. In examples, when in the unarmed state, the system 100, 200 is operable to provide a different function that involves monitoring for motion at the premises than in the armed state. In examples, this may comprise monitoring for indications of peril of the user 110 (i.e. a resident or occupant of the premises 10), e.g. as discussed above in relation to Figures 3 or 5 and 11. In examples, the different function may comprise monitoring a region in which the user 110 may be, for example to identify subtle movements of the user and/or to trigger a home automation function of the system 100, which may be for example to operate a home-living device (e.g. a light, a heating/cooling system or a kitchen appliance, etc.) or to enable voice commanding to operate such a home-living device. When in the unarmed state, in step 1030, the monitoring device 115, 215 detects motion in the environment using the one or more parameters associated with the unarmed state, i.e. in which at least one of the parameters is set to a value that provides a higher acuity in detecting motion than the value for the corresponding at least one parameter used in the armed state.
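Purely as a non-limiting illustration of steps 1015 and 1025, the selection of motion-detection parameters according to the arming state could be sketched in Python as follows. The numeric values are invented for illustration and are not taken from this disclosure; they merely show the armed-state parameters providing a lower acuity than the unarmed-state parameters.

ARMED_PARAMETERS = {
    "gain": 1.0,
    "threshold": 0.8,              # higher threshold -> lower acuity
    "min_threshold_crossings": 3,  # more crossings required -> lower acuity
    "frequency_range_hz": (0.5, 5.0),
}

UNARMED_PARAMETERS = {
    "gain": 2.0,                   # higher gain -> higher acuity
    "threshold": 0.4,              # lower threshold -> higher acuity
    "min_threshold_crossings": 2,
    "frequency_range_hz": (0.2, 8.0),  # extended detected frequency range
}

def select_detection_parameters(armed):
    # Corresponds to steps 1015 (armed) and 1025 (unarmed) of Figure 10.
    return ARMED_PARAMETERS if armed else UNARMED_PARAMETERS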

Figure 11 shows an example of operation of the monitoring device 115, 215 when in the unarmed state. The monitoring device could be the monitoring device 115 of Figures 2, 3 and 4 or the monitoring device 215 of Figure 5. The method may be implemented partly or wholly in software and/or may be implemented partly or wholly in hardware. For example, the method may be implemented in part or wholly by the processing module 175 or by one or more of: motion detection module 135, sound recognition unit 162, thresholder 150, processing module 172, image signal processor 620 and/or further processor 625.

At step 1105 the monitoring device 115 monitors for motion using the motion detector 120. If it is determined that motion is detected by the motion detection module 135 of the motion detector 120 (step 1110) then, responsive to motion being detected by the motion detector 120 of the monitoring device 115, the sound processing module 145 is triggered into an operational state (step 1115). In the operational state, the sound processing module 145 processes the output of the sound transducer 140 at least to determine if an audio trigger is represented therein (step 1120).

If no audio trigger is represented in the output of the sound transducer 140, then the sound processing module 145 continues to process the output of the sound transducer 140 to determine if an audio trigger is represented therein until either an audio trigger is detected or a predetermined time has expired (step 1125). If the predetermined time has expired without any audio triggers being detected then the sound processing module 145 is reverted back into a non-operational (e.g. lower power or off) state and the monitoring device 115 continues to monitor for motion using the motion detector 120 (step 1105).

If it is determined that the audio trigger is represented in the output of the sound transducer 140, then at step 1130 the image capturing unit 165 of the monitoring device 115 may be automatically triggered to capture at least one image. Optionally, a notification is also broadcast or multicast for any other camera-containing components of the monitoring system 100 to collect images as described above.
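Purely as a non-limiting sketch, the overall flow of Figure 11 could be expressed in Python along the following lines. The 'device' object and each of its methods are assumptions standing for the corresponding steps described above; they are not part of this disclosure.

import time

def unarmed_monitoring_loop(device, listen_window_s=30.0):
    # 'device' is assumed to expose the listed methods; each call stands for
    # the corresponding step of Figure 11.
    while True:
        if device.motion_detected():                    # steps 1105 and 1110
            device.enable_sound_processing()            # step 1115
            deadline = time.monotonic() + listen_window_s
            while time.monotonic() < deadline:          # step 1125 (timeout)
                if device.audio_trigger_detected():     # step 1120
                    device.capture_image()              # step 1130
                    device.broadcast_notification()
                    break
            device.disable_sound_processing()           # revert to lower power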

The above examples are provided by way of illustrating how the invention might be put into practice, but other implementations could be provided that fall within the scope of the claims.

For example, the thresholder 150 may or may not be provided. In examples where the thresholder 150 is provided, it need not be provided using an electronic component such as an electronic comparator but may be implemented using any appropriate comparison means such as being implementable by a processor (e.g. by software operating thereon). Furthermore, steps of the methods illustrated in Figure 10 or Figure 11 may be performed by any suitable software or hardware or combination thereof, including any of the motion detection module 135, sound recognition unit 162, thresholder 150, processing modules 172 or 175, or any other processing arrangement, including discrete or distributed processing, that would be apparent to a skilled person.

Although in examples the system 100, 200 is described as detecting and responding to indications of peril when in the unarmed state, other functionality that involves monitoring for motion may alternatively or additionally be provided when the system 100, 200 is in the unarmed state. For example, when in the unarmed state, the system 100, 200 may be configured to monitor a state and/or activity of an occupant, or it could perform a home automation action such as turning on a light, amongst other possibilities.

In the unarmed state, the performance of the functionality may or may not be dependent on the detection of an audio trigger. Although, in examples, the audio trigger represents an indication of peril, this need not necessarily be the case. For example, in the unarmed state, detection of the audio trigger could instead trigger an alarm, or at least transmit a notification, without capturing another image, or cause the system 100, 200 to perform a home automation action, amongst other possible functionality.

Examples described herein may provide functionality when the system 100, 200 is in the unarmed state that involves detecting motion for which false motion detections are more tolerable than for the functionality provided when the system 100, 200 is in the armed state. For example, in the armed state, the system 100, 200 may be configured to provide security functionality that comprises threat detection (e.g. intruder detection) and/or an intervention action (e.g. an intruder deterrent action). These are potentially very significant actions and the tolerance of false motion detections may be relatively low. However, an intruder may be detected based on a person travelling across a space, typically by walking, which may correspond to movement over a greater distance and/or at a greater speed than other body movements that may be performed when a person is stationary or generally stationary. Therefore, to detect an intruder, a device does not necessarily need to detect a person changing their posture, moving only a part of their body, or only slightly adjusting their position (e.g. while remaining on an item of furniture), and having the device 115 so sensitive that it would detect such subtle movements may result in too much detection of noise and therefore too many false alarms. These more subtle movements might however be of interest to a device tailored to be responsive to a resident or other legitimate user of the system, who may be present and make such movements when the device 115 is not associated with an armed state.

For example, in the unarmed state, the system 100, 200 may be configured to provide different functionality such as detection of a user potentially being in peril, user instruction of home automation functions, user activity monitoring, and/or the like, which may be more tolerant of false motion detections, and improved responsiveness to subtle motions may be beneficial for providing such functionality. As such, in these cases, the motion detector may benefit from having at least one of the parameters used to detect motion in the unarmed state being set to a value that provides a higher acuity in detecting motion than a value for the corresponding at least one parameter used in the armed state. Furthermore, since the functionality provided in the unarmed state uses a cascading triggering arrangement in which the detection of motion is used to trigger a further triggering and detection step, e.g. triggering the processing of audio data to detect the presence of audio triggers, that in turn trigger performance of an ultimate action, then the effects of false motion detection may be reduced or eliminated whilst providing a high sensitivity of motion detection in the unarmed state. The at least one parameter may comprise one or more of: a gain, a sensitivity, a threshold level, a detected frequency range, a number of threshold crossings required to trigger a detection of motion, a frequency-gain profile, transfer function, and/or the like. The parameter that is set to a value in the unarmed state that provides a higher acuity in detecting motion than a value for the parameter in the armed state can thus be selected to provide increased acuity in motion detection in a way that suits the functionality being provided.

For example, the parameter in the form of a lowest frequency of a detected frequency range of a signal output from the motion detector can be lowered, or an increased gain or sensitivity (e.g. via a lower threshold) can be selectively applied at lower frequencies, in the unarmed state relative to the armed state. This may provide increased sensitivity to slow user motions, which typically give rise to lower frequency signals output from the motion detector. Such sensitivity might be desirable in detecting activity of a user, but the lower frequency signals might also be indicative of noise such as pet or curtain motion, and triggering a motion detection event based on such noise may be particularly undesirable for security applications.

Conversely, a highest frequency of a detected frequency range of a signal output from the motion detector can be increased, or an increased gain or sensitivity can be selectively applied at the highest frequencies, in the unarmed state relative to the armed state, in order to better detect faster motions.
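
Purely as an illustrative sketch, the following Python fragment shows how the detected frequency band could be narrowed in the armed state and extended at both ends in the unarmed state, as described in the two preceding paragraphs. The cutoff values are hypothetical assumptions and not measured device parameters.

    # Hypothetical sketch: cutoff frequencies are illustrative assumptions.
    def detection_band_hz(armed: bool) -> tuple:
        """Return (lowest, highest) frequency accepted by the motion detection stage."""
        if armed:
            return (0.5, 5.0)   # armed: reject very slow and very fast signal content
        return (0.2, 8.0)       # unarmed: extended band for slower and faster motions

    def within_detection_band(dominant_freq_hz: float, armed: bool) -> bool:
        low, high = detection_band_hz(armed)
        return low <= dominant_freq_hz <= high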

In another example, the parameter in the form of a minimum number of threshold crossings required to detect motion can be lowered. This may provide increased sensitivity to small user motions (i.e. motions travelling a shorter distance), which might be desirable in detecting activity of a user but which might also be indicative of noise, and triggering a motion detection event based on such noise may be particularly undesirable for security applications.

The disclosure is not limited to these examples of the parameters, and other parameters of the monitoring device 115, 215 can be adjusted in order to provide a higher acuity to certain types of motion in the unarmed state relative to the armed state, depending on the particular functionality being provided in the unarmed state. That is, different functions may benefit from better detection of different types of motion and/or from inhibiting detection of certain types of motion. As the functionality or set of functionalities that uses motion detection in the unarmed state is different from the functionality or set of functionalities that uses motion detection in the armed state, the parameters used by the monitoring device 115, 215 to detect motion using the motion detector 120, 220 can be adjusted to better suit the different functionality or set of functionalities provided for each of the armed state and the unarmed state.
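
As a further non-limiting illustration, the Python sketch below counts comparator threshold crossings and applies a lower minimum count in the unarmed state than in the armed state. The threshold value and the crossing counts are hypothetical assumptions chosen for illustration.

    # Hypothetical sketch: threshold value and crossing counts are illustrative assumptions.
    def count_threshold_crossings(samples, threshold):
        """Count transitions of the sampled signal across the threshold, in either direction."""
        crossings = 0
        for prev, curr in zip(samples, samples[1:]):
            if (prev < threshold) != (curr < threshold):
                crossings += 1
        return crossings

    def motion_detected(samples, armed: bool, threshold: float = 0.5) -> bool:
        min_crossings = 4 if armed else 2  # fewer crossings required in the unarmed state
        return count_threshold_crossings(samples, threshold) >= min_crossings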

Method steps of the invention can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit) or other customised circuitry. Processors suitable for the execution of a computer program include CPUs and microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including, by way of example, semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

To provide for interaction with a user, the invention can be implemented with a device having a screen, e.g., a CRT (cathode ray tube), plasma, LED (light emitting diode) or LCD (liquid crystal display) monitor, for displaying information to the user, and an input device, e.g., a keyboard, a touch screen, a mouse, a trackball, and the like, by which the user can provide input to the computer. Other kinds of devices can be used as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback, and input from the user can be received in any form, including acoustic, speech, or tactile input.