

Title:
IMAGE CAPTURE ARRANGEMENTS AND METHODS
Document Type and Number:
WIPO Patent Application WO/2024/068035
Kind Code:
A1
Abstract:
Provided is an automated method of capturing images with an image capture arrangement that is arranged to monitor a monitored area, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting human presence in the monitored area, activating a range-detecting sensing arrangement to determine the range of any presence within the monitored area; activating the image sensor only if information received from the range-detecting sensing arrangement indicates presence within a target zone within the monitored area; and capturing images of the monitored area. Also provided is an image capture arrangement comprising a processor, and coupled to the processor: an image sensor to capture images of a monitored area; a thermal sensor to detect human presence in the monitored area; and a range-detecting sensing arrangement; wherein in a rest state the image sensor and the range-detecting sensing arrangement are powered down, and the processor is configured in at least one operating mode to respond to a signal from the thermal sensor that indicates human presence in the monitored area by powering up the range-detecting sensing arrangement to determine the range of any presence within the monitored area, and the processor is further configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates presence within a target zone within the monitored area.

Application Number:
PCT/EP2022/088059
Publication Date:
April 04, 2024
Filing Date:
December 30, 2022
Assignee:
VERISURE SARL (CH)
International Classes:
G08B13/196; G08B29/18
Domestic Patent References:
WO2022013863A12022-01-20
Foreign References:
US20210358293A12021-11-18
US9058523B22015-06-16
CN111899447A2020-11-06
US11232688B12022-01-25
Attorney, Agent or Firm:
DENNEMEYER & ASSOCIATES SA (DE)
Claims:
Claims

1. An image capture arrangement comprising a processing arrangement, and coupled to the processing arrangement: a detection arrangement to detect motion in a monitored area, the processing arrangement and the detection arrangement being configured to determine whether detected motion could be from an object of interest; an image sensor to capture images of the monitored area; wherein in a rest state the image sensor is powered down, and the processing arrangement is, in at least one operating mode, configured based at least in part on a determination that detected motion could be from an object of interest to power up the image sensor to capture images of the monitored area, and the processing arrangement is further configured, if it is determined that an object of interest is within a target zone of the monitored area, to report and/or save the processed images; wherein determining whether the object of interest is within a target zone of the observed area is based either on ranging information provided by a ranging arrangement targeting the monitored area or on ranging information determined by performing range estimation on the captured images.

2. An image capture arrangement as claimed in claim 1, wherein the processing arrangement is further configured to process the captured images to determine the object(s) whose motion was detected and to determine whether the determined object(s) is an object of interest; and if it is determined that the object that triggered motion detection was an object of interest, to determine whether the object of interest is within a target zone of the observed area.

3. An image capture arrangement as claimed in claim 1 or claim 2, wherein the processing arrangement is further configured to determine whether the captured images include any outlying objects of interest, being objects of interest outside the target zone, and if there are any such outlying objects of interest to process the captured images to mask/blur/remove image portions corresponding to said outlying objects of interest.

4. An image capture arrangement as claimed in claim 1 or claim 2, wherein the monitored area has a near field zone and a far field zone beyond the near field zone, and wherein the processing arrangement is configured to: identify a presence of an object of interest, optionally a person, in the near field zone; responsive to the identified presence, define a reserved region of the image corresponding to at least a portion of said object of interest, optionally a person, in the image; and obscure a second portion of the image outside the reserved region.

5. An image capture arrangement as claimed in claim 4, wherein the reserved region is a dynamic reserved region.

6. An image capture arrangement as claimed in claim 4 or claim 5, wherein the second portion of the image corresponds to at least a portion of the far field zone.

7. An image capture arrangement as claimed in any one of claims 4 to 6, wherein the processing arrangement is configured to identify a presence of a person in the far field zone, and to set the second portion of the image to correspond at least partly to the position in the image of the person in the far field zone, to obscure features of the person.

8. An image capture arrangement as claimed in any one of claims 4 to 7, wherein the processing arrangement is configured to apply a far-field mask to the image to define one or more pre-defined regions of the image associated with the far field, and wherein the second portion of the image corresponds to said one or more pre-defined regions excepting the reserved region.

9. An image capture arrangement as claimed in any one of the preceding claims, wherein the detection arrangement comprises one or more thermal sensors.

10. An image capture arrangement as claimed in claim 9, wherein the thermal sensor comprises a Thermal MOS, “TMOS”, sensor.

11. An image capture arrangement as claimed in claim 9, wherein the thermal sensor comprises a PIR sensor.

12. An image capture arrangement as claimed in any one of the preceding claims, wherein the ranging information is provided by a ranging arrangement that is integral with the image capture arrangement.

13. An image capture arrangement as claimed in claim 12, wherein the ranging arrangement comprises a time of flight detection system.

14. An image capture arrangement as claimed in claim 13, wherein the ranging arrangement comprises a radar arrangement.

15. An image capture arrangement as claimed in claim 14, wherein the radar arrangement is configured to provide directional information as well as ranging information.

16. An image capture arrangement as claimed in claim 14 or 15, wherein the processing arrangement is further configured to activate the image sensor only if information received from the radar arrangement indicates human presence within the target zone within the monitored area.

17. An image capture arrangement as claimed in any one of claims 1 to 11, wherein the ranging information is determined by performing range estimation on the captured images.

18. An image capture arrangement as claimed in any one of the preceding claims, wherein the processing arrangement, image sensor, thermal sensor, and range-detecting sensing arrangement are all housed in a common housing.

19. An image capture arrangement as claimed in claim 18 in the form of a video doorbell.

20. An image capture device as claimed in claim 18 in the form of a security camera for an alarm system.

21. An image capture arrangement as claimed in any one of claims 1 to 20, wherein in the at least one operating mode, the processing arrangement is further configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates a detected presence persisting within a target zone within the monitored area for more than a predetermined time.

22. An image capture arrangement as claimed in claim 21 wherein the predetermined time is at least 2 seconds, optionally at least 3 seconds, optionally at least 4 seconds, optionally at least 5 seconds, optionally at least 6 seconds, optionally at least 7 seconds, optionally at least 8 seconds, optionally at least 9 seconds, optionally at least 10 seconds.

23. An image capture arrangement as claimed in any one of claims 1 to 20, wherein in the at least one operating mode, the processing arrangement is further configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates a detected presence within a target zone within the monitored area having a trajectory corresponding to one or more predetermined criteria.

24. An image capture arrangement as claimed in claim 23, wherein one of the one or more predetermined criteria is that a trajectory is classified as threatening.

25. An image capture arrangement as claimed in claim 24, wherein a trajectory along a path that arcs towards the image capture arrangement but which is generally transverse to a normal to the plane of the image sensor of the image capture arrangement is classified as non-threatening.

26. An image capture arrangement as claimed in claim 24 or claim 25, wherein a trajectory that leads towards the image capture arrangement is classified as threatening.

27. An image capture arrangement as claimed in any one of claims 23 to 26, wherein the processing arrangement is configured to apply machine learning or a trained neural network, or the like, to classify and distinguish between trajectories that are threatening and those that are not.

28. An image capture arrangement as claimed in any one of the preceding claims, wherein in the at least one operating mode, subsequent to activating the image sensor, the processing arrangement powers down the range-detecting sensing arrangement before powering down the image sensor.

29. An image capture arrangement as claimed in any one of claims 1 to 27, wherein in the at least one operating mode, subsequent to activating the image sensor, the processing arrangement powers down the range-detecting sensing arrangement and the image sensor substantially simultaneously.

30. An image capture arrangement as claimed in any one of the preceding claims, wherein the range-detecting sensing arrangement comprises a radar system, optionally a low-power radar system.

31. An image capture arrangement as claimed in claim 30, wherein the radar system operates in the 60GHz band.

32. An image capture arrangement as claimed in any one of claims 1 to 29, wherein the range-detecting sensing arrangement comprises a time-of-flight detection system, optionally a system based on the use of ultrasound or light.

33. An image capture arrangement as claimed in any one of the preceding claims, wherein the image sensor has a longer capture range than the thermal sensor and/or the range-detecting sensing arrangement.

34. An image capture arrangement coupled to a premises security monitoring system, the image capture arrangement optionally according to any preceding claim, the image capture arrangement including an image sensor and a processor, the processor of the image capture arrangement being configured to modify its behaviour depending upon a reported arm status of the security monitoring system.

35. An image capture arrangement as claimed in claim 34, further comprising a range-detecting sensing arrangement.

36. An image capture arrangement as claimed in claim 35, wherein the processor of the image capture arrangement is configured: in the event that the security monitoring system is in a disarmed state, to avoid activating the image sensor despite the information received from the range-detecting sensing arrangement indicating human presence within a target zone within a monitored area; and, in the event that the security monitoring system is in an armed state, to activate the image sensor if the information received from the range-detecting sensing arrangement indicates human presence within the target zone within the monitored area.

37. An automated method comprising: i) detecting motion in an observed area; ii) determining whether the detected motion could be from an object of interest; iii) based at least in part on ii), waking a camera and capturing video or multiple images of the observed area; iv) if it is determined that an object of interest is within the target zone, reporting or saving the processed images; wherein the determining of step iv) is based either on ranging information provided by a ranging arrangement targeting the observed area or on ranging information determined by performing range estimation on the captured images.

38. The method of claim 37, further comprising processing the captured images to determine the object(s) whose motion was detected and to determine whether the determined object(s) is an object of interest; and, if it is determined that the object that triggered motion detection was an object of interest, determining whether the object of interest is within a target zone of the observed area.

39. The method of claim 37 or 38, carried out on a peripheral device of a security system.

Description:
Image capture arrangements and methods

Technical field

The present invention relates to an image capture arrangement, security monitoring systems including image capture arrangements, methods including methods of capturing images with an image capture arrangement, and the processing of captured images.

It is nowadays commonplace to include an image capture arrangement such as a still-image or video camera in security monitoring systems for domestic and commercial premises. Although in many situations such image capture arrangements may be mains powered, in some situations - particularly in domestic installations - it is necessary to rely on battery power. But even when an image capture arrangement is fed from a mains power supply, a battery power supply may be provided as backup for use in the event that the mains power supply fails for whatever reason. For example, it is not unusual for a villain to cause a failure of the mains supply as part of a burglary or other intrusion, so it is important to provide an alternative power source so that images of any intruders and their actions can be captured for immediate or subsequent use. If the battery power supply (whether principal or backup) is exhausted, the image capture device cannot perform its primary function, so it is important to manage the power consumption of the image capture arrangement. Similar problems exist with the now popular video doorbells, as for ease of installation these often rely solely on battery power rather than having a mains-fed power supply, and in any event mains power failures need to be catered for. Unfortunately, capturing and transmitting high resolution images, especially video images, are power-hungry operations, which poses a problem for maintaining battery life. Moreover, there exists a general interest in reducing the power consumption of devices, whether or not they rely on a battery power supply.

There therefore exists a need for improved management of energy consumption in image capture arrangements.

Moreover, the EU General Data Protection Regulation (GDPR) regulates data protection and privacy for individuals, and since its adoption in 2016 has served as a model for similar legislation in many countries outside Europe. The GDPR applies to the processing of “personal data” which is defined very broadly as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”. Included within this definition are images, such as photographs and video relating to an identified or identifiable natural person. The Regulation does not apply to the processing of personal data “by a natural person in the course of a purely personal or household activity”, and on this basis it is believed that the capture and (local) storage of images from domestic security systems may be outside the scope of the Regulation. However, if the images captured are of individuals not on the property or premises protected by the domestic security systems - for example if the images include people on the street or in communal areas of a multi-occupancy building, then the captured images are likely to be subject to the Regulation. This poses a potential problem for the owners/users of domestic security systems and, for example, video doorbell installations. In the UK this issue came to prominence in 2021 when a dispute between neighbours over the installation of a domestic security system and a Ring (RTM) doorbell went to Court, resulting in the neighbour with the security system being found to have breached the GDPR and liable for a substantial fine and damages.

Video doorbells and domestic security systems continue to increase in popularity, despite the risk of (inadvertently) breaching the GDPR (or the local equivalent outside Europe). There therefore exists a need to provide safeguards to reduce the risk of inadvertently breaching the GDPR or other equivalent legislation when installing and operating domestic security monitoring systems and/or video doorbells.

The present application proposes various solutions relevant to one or both of these problem areas. The various solutions may be applied to addressing one or other of these problem areas, but in many instances the solutions may be applicable to both problem areas.

Summary

According to a first aspect there is provided an image capture arrangement comprising a processor, and coupled to the processor: an image sensor to capture images of a monitored area; a thermal sensor to detect human presence in the monitored area; and a range-detecting sensing arrangement; wherein in a rest state the image sensor and the range-detecting sensing arrangement are powered down, and the processor is configured in at least one operating mode to respond to a signal from the thermal sensor that indicates human presence in the monitored area by powering up the range-detecting sensing arrangement to determine the range of any human presence within the monitored area, and the processor is further configured to activate the image sensor if (optionally selectively if, optionally only if) information received from the range-detecting sensing arrangement indicates human presence within a target zone within the monitored area.
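Purely by way of illustration, the staged wake-up chain of this aspect can be sketched as a small state machine: a thermal trigger powers up the ranging arrangement, and the image sensor is activated only if the reported range falls within the target zone. All names and interfaces below (`WakeChain`, `on_thermal_detection`, `on_range_report`, the example 4 m zone) are hypothetical and form no part of the described arrangement:

```python
from enum import Enum, auto

class State(Enum):
    REST = auto()       # image sensor and ranging arrangement powered down
    RANGING = auto()    # ranging arrangement powered up; image sensor still down
    CAPTURING = auto()  # image sensor activated

class WakeChain:
    """Illustrative sketch of the thermal -> ranging -> camera wake-up chain."""

    def __init__(self, target_zone_max_m):
        self.target_zone_max_m = target_zone_max_m
        self.state = State.REST

    def on_thermal_detection(self):
        # A thermal (e.g. PIR or TMOS) trigger powers up the ranging arrangement.
        if self.state is State.REST:
            self.state = State.RANGING

    def on_range_report(self, range_m):
        # Activate the image sensor only for presence inside the target zone;
        # otherwise return to the low-power rest state.
        if self.state is State.RANGING:
            if range_m <= self.target_zone_max_m:
                self.state = State.CAPTURING
            else:
                self.state = State.REST
```

Note that in this sketch a detection outside the target zone returns the arrangement directly to the rest state, so neither the ranging arrangement nor the image sensor consumes power until the thermal sensor fires again.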

Optionally, the processor is further configured to provide information received from the range-detecting sensing arrangement as an input to an image processing operation performed on images captured by the image sensor. Optionally, the image processing operation involves obscuring image portions, optionally image portions that include human features of any human determined to be more than a threshold distance from the image capture arrangement.

Optionally, the processor is configured to perform the image processing operation.

Optionally, the image capture arrangement further comprises an RF transceiver, and the processor may be configured to use the RF transceiver to transmit image data and information received from the range-detecting sensing arrangement to a remote processor for the remote processor to perform the image processing operation.

Optionally, the thermal sensor comprises a Thermal MOS, “TMOS”, sensor.

Optionally, the thermal sensor comprises a PIR sensor.

Optionally the processor, image sensor, thermal sensor, and range-detecting sensing arrangement are all housed in a common housing. Such an arrangement, housed in a common housing may for example be provided in the form of a video doorbell or may be provided in the form of a security camera for an alarm system.

Where used herein (throughout this specification, including in the claims), the term “video doorbell” refers to any device that includes video doorbell functionality, whether or not the device also includes other functionality, such as access device functionality, and/or security camera functionality.

Optionally, in the first operating mode the processor may further be configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates human presence persisting within a target zone within the monitored area for more than a predetermined time. In such an image capture arrangement the predetermined time may be at least 2 seconds, optionally at least 3 seconds, optionally at least 4 seconds, optionally at least 5 seconds, optionally at least 6 seconds, optionally at least 7 seconds, optionally at least 8 seconds, optionally at least 9 seconds, optionally at least 10 seconds.
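The persistence condition above can be illustrated with a short sketch that checks whether ranging reports show continuous presence inside the target zone for the predetermined time. The function name, the `(timestamp, range)` report format and the thresholds are illustrative assumptions only:

```python
def presence_persisted(range_reports, target_zone_max_m, min_duration_s):
    """Return True if time-ordered (timestamp_s, range_m) reports show
    presence inside the target zone continuously for at least
    min_duration_s; any report outside the zone resets the timer."""
    start = None
    for t, r in range_reports:
        if r <= target_zone_max_m:
            if start is None:
                start = t  # presence inside the zone begins
            if t - start >= min_duration_s:
                return True
        else:
            start = None  # presence interrupted; restart the clock
    return False
```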

Optionally, subsequent to activating the image sensor, the processor may be configured to power down the range-detecting sensing arrangement before powering down the image sensor.

Optionally, in the first mode, subsequent to activating the image sensor, the processor may be configured to power down the range-detecting sensing arrangement and the image sensor substantially simultaneously.

Optionally, the range-detecting sensing arrangement comprises a radar system, which may be a low-power radar system. The radar system may operate in the 60GHz band.

Optionally, the range-detecting sensing arrangement comprises a time-of-flight detection system, optionally a system based on the use of ultrasound or light.

Optionally the range-detecting arrangement for any of the embodiments of this disclosure comprises one or more selected from: a radar arrangement; a time-of-flight sensing arrangement; an ultrasonic or optical sensor; an arrangement for detecting range information based at least partly on the images captured by the image capture arrangement. Doppler detection may be used for movement detection, in particular using an ultrasonic or optical sensor. Simply receiving a reflection of ultrasound or light off a target can also provide an indication of proximity.

Optionally, the image sensor has a longer capture range than the thermal sensor and/or the range-detecting sensing arrangement.

In a second aspect there is provided an image capture arrangement coupled to a premises security monitoring system, the image capture arrangement optionally including any of the features of the first aspect, the image capture arrangement including an image sensor and a processor, the processor of the image capture arrangement being configured to modify its behaviour depending upon a reported arm status of the security monitoring system.

The image capture arrangement according to the second aspect may include a rangedetecting sensing arrangement.

In the image capture arrangement according to the second aspect, the processor of the image capture arrangement may be configured: in the event that the security monitoring system is in a disarmed state, to avoid activating the image sensor despite the information received from the range-detecting sensing arrangement indicating human presence within a target zone within a monitored area; and, in the event that the security monitoring system is in an armed state, to activate the image sensor if the information received from the range-detecting sensing arrangement indicates human presence within the target zone within the monitored area.

According to a third aspect there is provided an automated method of controlling an image capture arrangement that is arranged to monitor a monitored area, the image capture arrangement optionally according to any preceding aspect, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting human presence in the monitored area, activating a range-detecting sensing arrangement to determine the range of any human presence within the monitored area; and activating the image sensor if (for example, selectively if, optionally only if) information received from the range-detecting sensing arrangement indicates presence within a target zone within the monitored area.

According to a fourth aspect there is provided an automated method of capturing images with an image capture arrangement that is arranged to monitor a monitored area, the image capture arrangement optionally according to the first or second aspect above, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting human presence in the monitored area, activating a range-detecting sensing arrangement to determine the range of any human presence within the monitored area; activating the image sensor if (for example, selectively if, optionally only if) information received from the range-detecting sensing arrangement indicates presence within a target zone within the monitored area; and capturing images of the monitored area.

The method of the third or fourth aspect may further comprise performing an image processing operation performed on images captured by the image sensor using information received from the range-detecting sensing arrangement as an input.

Optionally, the image processing operation may involve obscuring image portions that include human features of any human determined to be more than a threshold distance from the image capture arrangement.

Optionally, the image processing operation may involve obscuring image portions that include human features of any human determined to be outside the target zone.
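By way of a sketch only, obscuring image portions corresponding to humans determined to be outside the target zone might proceed as follows. The detection format (bounding boxes with per-detection range estimates) and the single-tone blanking choice are illustrative assumptions, not part of the described method:

```python
import numpy as np

def obscure_out_of_zone(image, detections, target_zone_max_m):
    """Blank out image regions of humans ranged beyond the target zone.

    image: HxW (or HxWxC) uint8 array.
    detections: list of dicts, each with a 'box' (x0, y0, x1, y1) in pixel
    coordinates and an estimated 'range_m' from the ranging arrangement.
    """
    out = image.copy()
    for det in detections:
        if det["range_m"] > target_zone_max_m:
            x0, y0, x1, y1 = det["box"]
            out[y0:y1, x0:x1] = 0  # single-tone blanking of the region
    return out
```

A non-blanking technique (e.g. a blur, as discussed for the tenth and eleventh aspects below) could be substituted for the assignment to zero where positional information should be preserved.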

The method of the third or fourth aspect may further comprise transmitting to a remote processor image data from the image capture arrangement and information received from the range-detecting sensing arrangement.

According to a fifth aspect there is provided an image capture arrangement, the image capture arrangement optionally according to the first or second aspect above, the image capture arrangement comprising a processor, and coupled to the processor: an image sensor to capture images of a monitored area; a thermal sensor to detect human presence in the monitored area; and a radar arrangement; wherein in a rest state the image sensor and the radar arrangement are powered down, and the processor is configured in at least one operating mode to respond to a signal from the thermal sensor that indicates human presence in the monitored area by powering up the radar arrangement to determine the range of detected and/or human presence within the monitored area, and the processor is further configured to activate the image sensor if (for example, selectively if, optionally only if) information received from the radar arrangement indicates presence within a target zone within the monitored area.

According to a sixth aspect there is provided a method of controlling an image capture arrangement that is arranged to monitor a monitored area, the image capture arrangement optionally according to the first, second or fifth aspect, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detected presence in the monitored area, activating a radar arrangement to determine the range of any detected and/or human presence within the monitored area; and activating the image sensor if, (for example, selectively if, optionally only if) information received from the radar arrangement indicates human presence within a target zone within the monitored area.

According to a seventh aspect there is provided a method of capturing images with an image capture arrangement that is arranged to monitor a monitored area, the image capture arrangement optionally according to the first, second or fifth aspect, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting presence in the monitored area, activating a radar arrangement to determine the range of detected and/or human presence within the monitored area; activating the image sensor if (for example, selectively if, optionally only if) information received from the radar arrangement indicates human presence within a target zone within the monitored area; and capturing images of the monitored area.

According to an eighth aspect, an image capture arrangement and/or a method of capturing images is provided wherein a distance measuring sensor is used to provide range information to an object of interest in a monitored area. The range information is used as an input to control selective power-up of one or more functions and/or functional modules.

In some embodiments, the distance measuring sensor is activated in response to or dependent on a signal from a presence detecting sensor, for example, a thermal and/or infrared sensor, optionally a PIR or a TMOS sensor.

Additionally or alternatively, in some embodiments, the range information is used as an input to control selective power-up of an image capture image sensor for capturing an image and/or selective power-up of an image processor for processing a captured image.

In some embodiments, the range information is additionally used as an input parameter for image processing, for example, to generate an image targeting an object of interest based on the range information, while removing information in one or more other regions or zones of the image.

According to a ninth aspect, an image capture arrangement for a security system, comprises an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the surveillance area includes a primary zone and at least one secondary zone distinct from the primary zone, and wherein the processing arrangement is configured: to produce a processed image in which at least one or more image portions representing features in one or more of said at least one secondary zones, is obscured; wherein in the event of determining the presence of an object of interest within the primary zone, the processed image is produced with features of the object of interest within the primary zone unobscured.

A tenth aspect of the invention provides an image capture arrangement for a security system, optionally including any of the features described above, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure at least one region by a non-blanking technique that preserves at least a portion of image information in said region. A non-blanking technique is, for example, an obscuration technique that does not obscure the totality of the information originally present in the region obscured - and contrasts with a blanking technique in which all information originally present in the region obscured is either removed or suppressed to produce a blank image zone - e.g. to produce a monochrome image zone devoid of all original information content, such as by imposing a single-tone black or white (or any single-tone/hue colour) over the totality of the affected zone.

An eleventh aspect of the invention provides an image capture arrangement for a security system, optionally including any of the features described above, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure at least one region by any of blurring, painting over, applying a linear algorithm to increase complexity, optionally a box blur or a Gaussian blur, mixing and mirroring, distorting the image such as by imposing twirls or waves on the image, or some combination thereof.
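As an illustration of one of the listed non-blanking techniques, a box blur can be applied to a selected region as follows; the implementation details (kernel size, edge padding, grayscale input) are illustrative assumptions and not part of the described arrangement:

```python
import numpy as np

def box_blur_region(image, box, k=5):
    """Obscure a rectangular region with a k x k box blur - a non-blanking
    technique that removes fine detail (e.g. facial features) while
    preserving coarse positional information.

    image: HxW grayscale array; box: (x0, y0, x1, y1) in pixel coordinates.
    """
    x0, y0, x1, y1 = box
    region = image[y0:y1, x0:x1].astype(float)
    pad = k // 2
    padded = np.pad(region, pad, mode="edge")  # replicate edges to keep size
    h, w = region.shape
    out = np.zeros_like(region)
    # Sum the k x k shifted copies, then normalise to form the mean.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    out /= k * k
    result = image.copy().astype(float)
    result[y0:y1, x0:x1] = out
    return result.astype(image.dtype)
```

A Gaussian blur, or any of the other listed techniques such as mixing and mirroring, could be substituted for the box kernel where a smoother or less reversible obscuration is desired.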

A twelfth aspect of the invention provides an image capture arrangement for a security system, optionally including any of the features described above, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure at least one region using a technique that is effective to remove information permitting the identification of a person, such as facial features, but which retains image information from which a viewer is provided with positional information, optionally enabling spatial awareness on the position of image objects with respect to each other and/or with respect to the image capture arrangement.

Optionally the technique is such that the removed information is substantially non-recoverable in the processed image.

A thirteenth aspect of the invention provides an image capture arrangement for a security system, optionally including any of the features described above, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure a first region of the image using a first obscuration technique, and to obscure a second region of the image using a second obscuration technique, the second obscuration technique being different from the first, and the second obscuration technique preserving at least some information from the image in the second region.

A fourteenth aspect of the invention provides a method performed by an image capture arrangement of a security system, the image capture arrangement comprising an image sensor to capture images of a surveillance area, wherein the surveillance area includes a primary zone and at least one secondary zone distinct from the primary zone, the method comprising: producing a processed image in which at least one or more image portions representing features in one or more of said at least one secondary zones is obscured; wherein in the event of determining the presence of an object of interest within the primary zone, producing the processed image with features of the object of interest within the primary zone unobscured.

A fifteenth aspect of the invention provides a method performed by an image capture arrangement of a security system, optionally including any of the features described above, the method comprising obscuring at least one region by a non-blanking technique that preserves at least a portion of image information in said region.

A sixteenth aspect of the invention provides a method performed by an image capture arrangement of a security system, optionally including any of the features described above, the method comprising obscuring at least one region by any of blurring, painting over, applying a linear algorithm to increase complexity, optionally a box blur or a Gaussian blur, mixing and mirroring, distorting the image such as by imposing twirls or waves on the image, or some combination thereof.

A seventeenth aspect of the invention provides a method performed by an image capture arrangement of a security system, optionally including any of the features described above, the method comprising obscuring at least one region using a technique that is effective to remove information permitting the identification of a person, such as facial features, but which retains image information from which a viewer is provided with positional information, optionally enabling spatial awareness on the position of image objects with respect to each other and/or with respect to the image capture arrangement.

Optionally, the processing is such that the removed information is substantially non-recoverable in the processed image.

An eighteenth aspect of the invention provides a method performed by an image capture arrangement of a security system, optionally including any of the features described above, the method comprising obscuring a first region of the image using a first obscuration technique, and obscuring a second region of the image using a second obscuration technique, the second obscuration technique being different from the first, and the second obscuration technique preserving at least some information from the image in the second region.
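The two-technique approach of this aspect may be sketched as follows. This is a minimal illustrative sketch under assumed names and region formats, not the claimed implementation: the first region is blanked, while the second is darkened, which preserves relative contrast in that region.

```python
# Illustrative two-region obscuration: region1 is blanked (information
# removed), region2 is darkened (some tonal information preserved).
# Regions are (top, left, height, width); pixels are 0..255 greyscale.

def obscure(image, region1, region2):
    out = [row[:] for row in image]
    t, l, h, w = region1
    for r in range(t, t + h):          # first technique: blank to black
        for c in range(l, l + w):
            out[r][c] = 0
    t, l, h, w = region2
    for r in range(t, t + h):          # second technique: halve brightness,
        for c in range(l, l + w):      # retaining relative contrast
            out[r][c] = out[r][c] // 2
    return out
```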

A nineteenth aspect of the invention provides a method performed by an image capture arrangement of a security system, optionally including any of the features described above, wherein the surveillance area includes a primary zone and at least one secondary zone distinct from the primary zone, the method comprising activating an image sensor of the image capture arrangement only if information received from a range-detecting sensing arrangement indicates human presence persisting within the primary zone within the surveillance area for more than a predetermined time.

A twentieth aspect of the invention provides an image capture arrangement comprising a processing arrangement, and coupled to the processing arrangement: a detection arrangement to detect motion in a monitored area, the processing arrangement and the detection arrangement being configured to determine whether detected motion could be from an object of interest; and an image sensor to capture images of the monitored area; wherein in a rest state the image sensor is powered down, and the processing arrangement is, in at least one operating mode, configured based at least in part on a determination that detected motion could be from an object of interest to power up the image sensor to capture images of the monitored area, and the processing arrangement is further configured, if it is determined that an object of interest is within a target zone of the monitored area, to report and/or save the captured images; wherein determining whether the object of interest is within a target zone of the monitored area is based either on ranging information provided by a ranging arrangement targeting the monitored area or on ranging information determined by performing range estimation on the captured images.

Optionally, in the at least one operating mode, the processing arrangement is further configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates a detected presence within a target zone within the monitored area having a trajectory corresponding to one or more predetermined criteria.

Optionally, one of the one or more predetermined criteria is that a trajectory is classified as threatening. Optionally, a trajectory along a path that arcs towards the image capture arrangement but which is generally transverse to a normal to the plane of the image sensor of the image capture arrangement is classified as non-threatening. Optionally, a trajectory that leads towards the image capture arrangement is classified as threatening.

Optionally, the processing arrangement is configured to apply machine learning or a trained neural network, or the like, to classify and distinguish between trajectories that are threatening and those that are not.
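A simple geometric heuristic for the trajectory criteria above may be sketched as follows. This is an assumed illustration, not the machine-learning or neural-network classifier mentioned: a track whose motion is dominantly radial (towards the sensor) is treated as threatening, while a dominantly transverse track (such as an arc past the sensor) is not.

```python
# Illustrative trajectory heuristic: compare accumulated radial approach
# against accumulated transverse motion, with the sensor at the origin
# and the sensor normal along +y.
import math

def classify_trajectory(points):
    """points: list of (x, y) positions in metres.
    Returns 'threatening' or 'non-threatening'."""
    radial, transverse = 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        r0, r1 = math.hypot(x0, y0), math.hypot(x1, y1)
        step = math.hypot(x1 - x0, y1 - y0)
        dr = r0 - r1                      # positive when moving towards sensor
        radial += dr
        transverse += max(0.0, step - abs(dr))
    return "threatening" if radial > transverse else "non-threatening"
```

In practice a trained classifier could replace this heuristic while consuming the same track data.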

A twenty-first aspect of the invention provides an automated method comprising: i) detecting motion in an observed area; ii) determining whether the detected motion could be from an object of interest; iii) based at least in part on ii), waking a camera and capturing video or multiple images of the observed area; iv) if it is determined that an object of interest is within a target zone of the observed area, reporting or saving the captured images; wherein the determining of step iv) is based either on ranging information provided by a ranging arrangement targeting the observed area or on ranging information determined by performing range estimation on the captured images.

A twenty-second aspect of the invention provides an image capture arrangement for a security system, optionally including any of the features described above, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the surveillance area has a primary zone and at least one secondary zone distinct from the primary zone, and wherein the processing arrangement is configured to: identify the presence of an object of interest, optionally a person, in the primary zone and, responsive to the identified presence; define a reserved region of the image corresponding to at least a portion of said object of interest, optionally a person, in the image; and obscure a second portion of the image outside the reserved region.

In some embodiments, the primary zone corresponds to a near field zone of the surveillance area, and the at least one secondary zone corresponds to a far field zone beyond the near field zone, optionally wherein the or each secondary zone corresponds to a said far field zone.

In some embodiments, the reserved region has at least one boundary dependent on boundary information for the object of interest.

In some embodiments, the second portion of the image corresponds to at least a portion of the secondary zone.

In some embodiments, the processing arrangement is configured to identify the presence of a person in the secondary zone, and to set the second portion of the image to correspond at least partly to the position in the image of the person in the secondary zone, to obscure features of the person.

In some embodiments, the processing arrangement is configured to apply a far-field mask to the image to define one or more pre-defined regions of the image associated with the far field, and wherein the second portion of the image corresponds to said one or more pre-defined regions excepting the reserved region.
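The composition of the far-field mask with the reserved region may be sketched as follows. The set-based representation and all names here are assumptions for illustration: the pixels to obscure are the pre-defined far-field pixels minus any pixel inside the reserved region.

```python
# Illustrative mask composition: far-field pixels excepting the reserved
# region around the object of interest remain to be obscured.

def pixels_to_obscure(h, w, far_field_rows, reserved):
    """far_field_rows: set of row indices in the far-field mask;
    reserved: (top, left, height, width) for the object of interest.
    Returns the set of (row, col) pixels to obscure."""
    t, l, rh, rw = reserved
    reserved_px = {(r, c) for r in range(t, t + rh) for c in range(l, l + rw)}
    far_px = {(r, c) for r in far_field_rows for c in range(w)}
    return far_px - reserved_px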

In some embodiments, the processing arrangement is configured to identify the presence of the object of interest, optionally a person, in the primary zone based on range information associated with the object of interest.

In some embodiments, the range information is obtained from one or more selected from: processing of the image to derive range information; a signal from a range detecting arrangement.

In some embodiments, the range detecting arrangement comprises one or more selected from: a radar arrangement; a time-of-flight sensor; an ultrasonic sensor; an optical sensor.

In some embodiments, the range information includes information on direction as well as distance.

In some embodiments, the range information includes size information on the size of a detected object.

In some embodiments, the processing arrangement is configured to use the range information, and optionally size information, in determining whether an object of interest whose image has been captured is within or beyond the near field zone.

In some embodiments, the processing arrangement is configured (i) to obscure image portions representing non-human objects located in the secondary zone, optionally beyond a near field zone, and/or (ii) to obscure some but not all of the secondary zone image portions; and/or to obscure at least a majority of the secondary zone image portions; and/or to obscure all of the secondary zone image portions.

In some embodiments, the processing arrangement is configured to define a bounding box around an image portion representing a person within the near field zone.

In some embodiments, the processing arrangement is configured to obscure at least a portion of the image outside the or each bounding box.
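Obscuring the image outside the bounding box(es) may be sketched as follows. This is an illustrative sketch under an assumed greyscale nested-list representation: pixels inside any near-field bounding box pass through, all others are blanked.

```python
# Illustrative masking outside bounding boxes: keep box contents, blank
# everything else. Boxes are (top, left, height, width) tuples.

def mask_outside_boxes(image, boxes):
    def inside(r, c):
        return any(t <= r < t + h and l <= c < l + w for (t, l, h, w) in boxes)
    return [[px if inside(r, c) else 0 for c, px in enumerate(row)]
            for r, row in enumerate(image)]
```

The inverse (blanking only the contents of far-field boxes, as in the later embodiments) follows by swapping the branches of the conditional.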

In some embodiments, the processing arrangement is also configured to define a bounding box around any image portion representing a person in a secondary zone, optionally a far field zone.

In some embodiments, the processing arrangement is configured to obscure the content of any bounding box around any image portion representing a person in a secondary zone, optionally a far field zone.

In some embodiments, the processing arrangement and the image capture arrangement are provided in a common housing.

In some embodiments, the image capture arrangement is provided in a housing together with an RF transceiver, the RF transceiver providing a communications link to the remote processing arrangement.

In some embodiments, the image capture arrangement is provided by the video camera of a video doorbell.

In some embodiments, the processing arrangement is configured to obscure at least one region by a non-blanking technique that preserves at least a portion of image information in said region.

In some embodiments, the processing arrangement is configured to obscure at least one region by blurring, darkening, painting over, applying a linear algorithm to increase complexity, optionally a box blur or a Gaussian blur, mixing and mirroring, distorting the image such as by imposing twirls or waves on the image, or some combination thereof.

In some embodiments, the processing arrangement is configured to obscure at least one region using a technique that is effective to remove information permitting the identification of a person, such as facial features, but which retains image information from which a viewer is provided with positional information, optionally enabling spatial awareness on the position of image objects with respect to each other and/or with respect to the image capture arrangement.

In some embodiments, the processing arrangement is configured to obscure a first region of the image using a first obscuration technique, and to obscure a second region of the image using a second obscuration technique, the second obscuration technique being different from the first, and the second obscuration technique preserving at least some information from the image in the second region.

In some embodiments, the image capture arrangement is configured to store zone data defining the primary zone, optionally near field zone and the or each secondary zone, optionally far field zone, the zone data having been acquired during an initial setup process.

In some embodiments, the processing arrangement is configured to store an image file corresponding to the processed image.

In some embodiments, the image capture arrangement further comprises a transceiver wherein the processing arrangement is configured to use the transceiver to output an image file corresponding to the processed image.

In some embodiments, the processing arrangement is configured to activate the image sensor selectively if, optionally only if, information received from the range-detecting sensing arrangement indicates presence, optionally human presence, persisting within the primary zone within the surveillance area for more than a predetermined time.

In some embodiments, the predetermined time is at least 2 seconds, optionally at least 3 seconds, optionally at least 4 seconds, optionally at least 5 seconds, optionally at least 6 seconds, optionally at least 7 seconds, optionally at least 8 seconds, optionally at least 9 seconds, optionally at least 10 seconds.

In some embodiments, the processing arrangement is configured to analyse, additionally or alternatively, a trajectory of an object of interest in the monitored area, and to activate the image sensor in response to the detected trajectory.

In some embodiments, the image capture arrangement is configured as a video doorbell, or in the form of a security camera for an alarm system. Preferably, in any of the foregoing aspects, when images are processed in support of compliance with GDPR (or equivalent) or otherwise to remove information, the processing is such that the information removal cannot be reversed, so that the information removed is non-recoverable. There may be occasions, however, where it is desired to use a reversible obfuscation mechanism, for example on image portions that correspond to the grounds of protected premises. And at yet other times one may be interested in mechanisms that are substantially irreversible.

In certain aspects of the invention image processing may involve generating substantially non-reversible (non-recoverable) obscuration to reduce the risk that certain information can be recovered from obscured images.

Advantages of the ideas described herein include reducing occurrences of unnecessary powering-up of functions that involve relatively heavy power consumption, and that may otherwise result from signals from the thermal detector corresponding to detection events that are too far from the image capturing arrangement to qualify as being of interest. Detection of range information can provide useful information to control powering-up or activation of such functions or functional modules. This can lead to significant reduction in power consumption, and significantly prolong battery life in the case of battery-powered devices.

Brief description of Figures

Embodiments of the invention will now be described, by way of example only, with reference to the accompanying Figures, in which:

Figure 1 shows a view of the front of a premises protected by a security monitoring system according to an aspect of the present invention;

Figure 2 is a schematic part plan view of a premises protected by a security monitoring system according to an aspect of the invention;

Figure 3 illustrates schematically the major components of an image capture arrangement according to an aspect of the invention in the form of a video doorbell;

Figure 4 is a plan view of the grounds of the premises of Figure 1 showing the relationship between the house and its environment;

Figure 5 illustrates schematically operation of the detection system of an image capture arrangement, such as that shown in Figure 3, in which ranging is performed by image analysis;

Figure 6 shows a detail of Figure 5 representing the initial wake up detection subsystem;

Figure 7 shows a detail of Figure 5 representing the motion detection subsystem;

Figure 8 shows a detail of Figure 5 representing the human object detection subsystem;

Figure 9 shows a detail of Figure 5 representing range estimation by computer vision;

Figure 10 corresponds to Figure 5 but represents the use of a different type of low power sensor;

Figure 11 corresponds generally to Figure 5 but here, instead of using image analysis to determine the range of objects, a dedicated ranging system is used;

Figure 12 corresponds closely with Figure 11 but here the primary sensor is a TMOS sensor;

Figure 13 shows the use of radar to provide ranging information that is used in determining whether to wake the camera;

Figure 14 corresponds to Figure 13 but with the use of a TMOS sensor in place of the PIR arrangement;

Figure 15 corresponds to Figure 5, differing only in that in this variant of the method the computer vision range estimation is no longer required to detect motion inside the target area;

Figure 16 is a timeline diagram illustrating activation of the image sensor of an image capture arrangement based on the use of radar;

Figure 17 is a timeline diagram illustrating activation of the image sensor of an image capture arrangement based on the use of a hardware ranging arrangement;

Figure 18 illustrates schematically an example of segmenting the image provided by an image capture device according to aspects of the invention;

Figure 19 shows a potential view from the video doorbell of Figures 1 to 4;

Figure 20 corresponds to Figure 19 but shows details that may be captured in an image from the video doorbell;

Figure 21 corresponds to Figure 20 but shows people at closer proximity to the video doorbell;

Figure 22 corresponds to Figure 20 but shows the use of a bounding box around the people at closest proximity to the video doorbell; and

Figure 23 shows schematically different behaviours which may be detected by a thermal/low power detector and possibly also by a ranging arrangement.

Specific description

Figure 1 shows a view of the front of a premises 100 protected by a security monitoring system according to an aspect of the present invention. The premises, here in the form of a house, have an exterior door, here a front door, 102. The door gives access to a protected interior space. The security monitoring system secures at least part of a perimeter of the premises 100, and the door constitutes an exterior closure 102 in the secure perimeter giving access to a protected interior space 200 of the premises. A lock 104 on the exterior door is optionally electrically controlled so that it can be locked and unlocked remotely.

To the side of the door, on the facade of the house, is a first video camera in the form of a video doorbell 106 which looks out from the facade of the premises so that anyone approaching the door along the path 108 can be seen, and in particular when a visitor stands at the door their face should clearly be visible. The video doorbell includes an actuator, e.g. a push button, for a visitor to indicate their presence at the closure. The video doorbell also includes an audio interface to enable bidirectional audio communication with a visitor at the closure 102.

As is conventional, the video doorbell preferably includes an infrared light source to illuminate whatever is in front of the video doorbell. Optionally, as shown, the facade of the house also carries an external keypad (aka “access device”) 110 by means of which a user can disarm the security monitoring system and unlock the lock 104. Also shown is an optional second video camera 112 which is coupled to a presence and/or movement detector 114. The detector may optionally be a thermal detector, for example a PIR sensor. The second video camera 112 may be arranged, when the security monitoring system is armed, to capture video of the approach to the house and/or the private area, e.g. the garden to the front or the car-parking space to the side of the house, and to signal an alarm event to a controller of the security monitoring system. As with the doorbell camera, the second video camera is preferably provided with an audio interface to enable bidirectional audio communication with anyone observed by the second video camera. Although the first video camera is illustrated in the form of a video doorbell, the first video camera may additionally or alternatively have the features described above for the second video camera, whether or not plural video cameras are used.

Figure 2 is a schematic part plan view of a premises 100 protected by a security monitoring system according to an aspect of the invention, together with other elements of the system, corresponding generally to the premises of Figure 1. The front door 102, with electrically controlled lock 104, leads into the protected interior space 200 of the premises. Each of the windows 202 and the rear door 204 is fitted with a sensor 206 to detect when they are opened. Each of the sensors 206 includes a radio transceiver to report events to a controller, or central unit, 208 of the security monitoring system. If one of the sensors 206 is triggered when the system is armed, a signal is sent to the central unit 208, which in turn may signal an alarm event to a remote central monitoring station 210. The central unit 208 is connected to the remote central monitoring station 210 via the Internet 212, either via a wired or a wireless connection. Also wirelessly coupled to the central unit 208 are the video doorbell 106, the electrically controlled lock 104, and, if present, the second video camera 112, its associated presence and/or movement detector 114 (although the latter may be integral with the second video camera 112) and the audio interface 116. These items, and the sensors 206, are preferably coupled to the central unit 208 using transceivers operating in the industrial, scientific and medical (ISM) bands, for example a sub-gigahertz band such as 868 MHz, and the communications are preferably encrypted using shared secret keys. Preferably the central unit 208 and each image source (e.g. video camera, other camera, or video doorbell) also includes a transceiver to provide Wi-Fi connectivity, e.g. for the sharing of images and video, as timely transmission of image and video files requires a relatively large bandwidth, at least for the kinds of high resolution images which are typically wanted in security monitoring installations.
Preferably the Wi-Fi or other large bandwidth transceiver is provided in addition to the low-bandwidth transceiver (e.g. ISM transceiver) used for the transmission and reception of control signals and event notifications and the like. The security monitoring system may also include other sensors within the protected interior space, such as an interior video camera 214 and associated movement detector 216 (which again may be integral with the camera 214), and each of the interior doors 218 may also be provided with a sensor 206 to detect the opening/closing of the door. Also shown in Figure 2 are a user device 220, preferably loaded with an appropriate app, as will be described later, and a public land mobile network (PLMN) by means of which the central monitoring station 210, and the central unit 208, may communicate with the user device 220.

Operation of the security monitoring system may be controlled by one or more of: the controller 208, the remote monitoring station 210, and a security monitoring app installed on the user device 220. An access device 110 may be provided outside the protected premises, e.g. on an exterior wall of the premises, optionally adjacent a main entrance, such as the front door 102, to enable an occupier to arm and disarm the security monitoring system and optionally also to lock and unlock a lock (e.g. 104) in the relevant door, e.g. to gain admittance to the premises.

The remote monitoring station 210, if provided, may receive one or more signals from any of the first camera and/or video doorbell 106, the second camera 112, the keypad 110, and the sensors 206 and/or 520 (described in more detail later). The remote monitoring station 210 may transmit commands for controlling any one or more of: the arm state of the alarm system (e.g. armed or unarmed); commanding a tripped alarm state to be signalled by the alarm system (e.g. by triggering one or more sirens to generate alarm noise); commanding a lock state of the door lock 104 (e.g. locked or unlocked); commanding operation of one or more functions of the video doorbell 106; and commanding operation of one or more cameras to transmit images to the remote monitoring station 210. Communication with the remote monitoring station 210 may pass through the controller 208, as described above. In other embodiments without the remote monitoring station 210, or in the event that communication with the remote monitoring station 210 is interrupted, operation of the alarm system may be controlled by the controller 208. In yet other embodiments, the controller 208 may be omitted, and the individual peripheral devices may communicate directly with the remote monitoring station 210.

The security monitoring system app is installed on a user device 220, here shown as a smartphone, although of course it could be almost any kind of electronic device, such as a laptop or desktop computer, a tablet such as an iPad, a smart watch, or even a television. Having set the scene for the invention, we will now consider an approach according to aspects of the invention that can help reduce power consumption in image capture arrangements. Figure 3 illustrates schematically the major components of an image capture arrangement 300 in the form of a video doorbell, for example a doorbell 106 as shown in Figures 1 and 2. A processing arrangement 302 (hereinafter “processor”), which may be an MCU, a microprocessor, a collection of processors or more than one MCU, is coupled to an image sensor 304, here in the form of a video camera, and a low power and/or thermal sensor 306, which may for example be a thermal sensor for detecting presence, e.g. by detecting motion. Also coupled to the processor 302 is an optional ranging arrangement 308, for example a radar arrangement or a time of flight sensing arrangement, although ranging information may alternatively be derived from analysis of images captured by the image sensor 304. Time of flight ranging is typically based on the use of short ultrasound pulses. The processor 302 is also coupled to: one or more RF transceiver(s) 310 (for example a transceiver to support the rapid transmission of large image files, e.g. a Wi-Fi transceiver, and a narrow bandwidth transceiver, e.g. an ISM transceiver, for transmitting and receiving notifications and control messages, etc.); a memory 312 that stores software to control the operations of the processor 302 and also stores images/video captured by the image sensor 304 and any audio captured by the microphone; a power supply unit 314 (which preferably includes one or more batteries to at least provide backup in the event of failure of any mains-fed power supply, although the power supply may instead be based on battery power rather than on mains-fed power); an audio interface 316; and preferably also a lighting arrangement 318 that at least provides infra-red illumination to enable images to be captured in the absence of significant levels of visible light. These elements are preferably also provided in image capture arrangements other than video doorbells; for example, they may all be provided in a video camera of a security monitoring system such as that shown as camera 112 in Figures 1 and 2. Additionally, when the image capture arrangement of Figure 3 is a video doorbell it further comprises an actuator 320, a “bell push”, which may be in the form of a mechanical switch or in the form of an actuation device that detects user input using one or more capacitive or inductive sensors and that preferably does not rely on moving parts to detect an activation event. Again, when the image capture arrangement of Figure 3 is a video doorbell, it may further comprise a display 322 for the display of messages and visual feedback.

The elements shown in Figure 3 also largely correspond to those provided in an access device such as 110, although in this case a keypad 321 may be provided in place of, or in addition to, the actuator 320. Again, the ranging arrangement 308 is optional. Before explaining the operation of the image capture arrangement of Figure 3 we will develop the potential context of its use with reference to Figure 4. Figure 4 is a plan view of the premises 100 of Figure 1 showing the relationship of the house, and its private grounds, to a street 404 and a neighbouring house 101. The video doorbell 106 is mounted on the facade of the house 100 and therefore looks out over the path 108, driveway 109, the garden walls 400, and through the entranceway 402 to the street 404 and the footpaths 406 and 408 of the street, as indicated by the dashed lines 410. Similarly, the video camera 112 installed to monitor the car parking space 120 and adjacent portions of the garden also provides a view into the neighbouring garden, as indicated generally by the dashed lines 412. It will be appreciated that these video cameras are installed to monitor a specific area of interest, which may also be referred to as the target zone. In the case of the video doorbell the monitored area comprises a specific area of interest 414 that includes the area between the front of the house and the walls 400 and the entrance gateway, including the path 108 and the driveway. But in addition, the monitored area includes those portions of the street and footpaths visible through the gateway even though these areas lie outside the specific area of interest. Likewise, although the video camera 112 is provided to monitor the parking space 120 and adjacent garden, in fact the monitored area extends to include a large portion of the neighbour’s property even though the neighbour’s property lies outside the specific area of interest 416.
In the case of the video doorbell 106 in particular a distinction may be made between different zones within the specific area of interest 414. For example a near field zone may be defined to be within 1 to 1.5 metres of the video doorbell, the near field zone defining the zone where a visitor may stand while waiting for a rung doorbell to be answered, for example. The remainder of the specific area of interest 414 may constitute a single “far field” zone or be divided into two or more further zones - e.g. a medium field zone and a far field zone, and possibly additionally or alternatively also laterally divided into multiple regions.

Returning to Figure 3, at a high level the image capture arrangement 300 comprises a processor 302, and coupled to the processor 302: an image sensor 304 to capture images of a monitored area; a low power sensor 306, such as a thermal sensor, to detect human presence in the monitored area; and a range determining or range-detecting sensing arrangement 308, for example a radar arrangement. The sensor 306 is a presence sensor with a low power consumption - for example a lower power consumption than the range detecting arrangement, the image sensor, and/or the processor when processing images. The sensor 306 may detect presence based on detecting motion or simply based on detection of presence without the need for motion. Such presence, with or without motion, may be detected by thermal sensors, and in general thermal sensors can be expected to have suitably low power consumption for this application. For example, the sensor 306 may be a PIR (passive infrared) sensor which detects motion by detecting changes in infrared radiation, or optionally a TMOS sensor which senses absolute temperature (rather than a differential temperature as a PIR does). Typically a high quality PIR sensor will consume less than 0.01mW, e.g. 0.005mW, while a TMOS sensor may consume no more than about 0.05mW. Depending on the desired implementation, an advantage of a TMOS sensor over a PIR sensor is that the TMOS sensor is generally harder for an intruder to fool because, unlike a PIR sensor, it does not rely on detecting motion - an advantage that stems from the fact that the TMOS sensor responds to absolute temperature. Also, over ranges likely to be of interest in the present application, TMOS sensors may not need to be provided with associated optics, which is advantageous in terms of both cost and size, although optics may be provided if desired.
While other types of presence sensing are available, for example based on the use of ultrasound or based on modulation of light from a powered light source, such presence sensing arrangements typically consume too much power for this application - given that in many applications they may typically always be on. Hereafter, the terms low power sensor and thermal sensor may be used interchangeably. The low power sensor is preferably always active (“always on”), which at least in part explains why we want to use a sensing arrangement having a low power consumption. We want the low power sensor to have a power consumption lower, and preferably much lower, than that of the range detecting arrangement because we use signals from the low power sensor as a trigger to activate the range detecting arrangement, which is usually off. Ranging signals from the range detecting arrangement are used to determine whether or not there is presence within the specific area of interest of a camera, so that the camera can be turned on only when there is presence within the camera's specific area of interest (unless it is decided to turn on the camera - more generally, the image sensor - for some other reason, for example on instruction from the remote monitoring station 210).
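
To illustrate why this trigger hierarchy matters, a rough duty-cycle power budget can be sketched. The numbers below (radar and camera power, duty cycles) are illustrative assumptions chosen to be consistent with the orders of magnitude quoted in this description, not measured values:

```python
# Rough average-power estimate for a duty-cycled sensing chain.
# All figures are illustrative assumptions, not measured values.

def average_power_mw(always_on_mw, gated_mw, gated_duty):
    """Average power when a gated stage runs only for a fraction of the time."""
    return always_on_mw + gated_mw * gated_duty

# Always-on thermal sensor (~0.01 mW, as for a PIR), a ~100 mW radar woken
# for an assumed 0.1% of the time, and a ~500 mW camera (assumed) active
# for an assumed 0.01% of the time.
avg = average_power_mw(0.01, 100.0, 0.001) + 500.0 * 0.0001
print(round(avg, 3))  # ~0.16 mW average, versus 100+ mW with the radar always on
```

Even with generous duty cycles, the average draw stays orders of magnitude below an always-on radar, which is the point of gating the power-hungry stages behind the thermal sensor.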

The range detecting arrangement may, for example, comprise a radar arrangement, for example a low power radar arrangement. A low power radar arrangement may have a power consumption of less than about 200mW, optionally less than about 100mW, optionally less than about 50mW, optionally less than about 25mW. A radar arrangement can provide range information including distance and direction to a detected target. A radar arrangement may also provide information on the size of a detected target or targets, from which, for example, it may be possible to infer the presence of one or more humans (a person or people). Additionally or alternatively, the range detecting arrangement may comprise a so-called time-of-flight sensor, for example an ultrasonic time-of-flight sensor, or a light-based time-of-flight sensor comprising, for example, a laser or an LED. Such a sensor may provide distance and/or movement information, optionally without direction information. However, a time-of-flight sensor may be more power efficient and/or cost efficient than a radar arrangement.

The image capture arrangement may be so configured that in a rest state the image sensor 304 and the ranging arrangement are powered down, to save energy, and the processor may be configured in at least one operating mode to respond to a signal from the thermal sensor 306 that indicates human presence in the monitored area by powering up the ranging arrangement to determine the range of any human presence within the monitored area. The image sensor 304 has associated optics, not shown, for focusing light onto the image sensor, and may take the form of a camera module that includes dedicated processing and other features (e.g. memory) useful in a camera module, but equally the image sensor and optics may work in conjunction with the processor 302 and memory 312 to provide the functionality of a camera. In either format, the image capture arrangement is preferably configured to capture video as well as individual still images. Preferably there is also provided at least one microphone to capture audio, for example along with any video captures.

The processor may further be configured in the at least one operating mode to activate the image sensor only if information received from the ranging arrangement indicates presence within a target zone within the monitored area.

Any radar arrangement may conveniently operate in the 60GHz band, where low power consumption system-on-chip solutions are available in very compact packages (for example, where the antennas are provided “on package” rather than as external components) from the likes of Infineon, Texas Instruments, and Socionext. Suitable devices include the BGT60LR11AIP device from Infineon, and the IWRL64322 from TI. Although these devices have surprisingly low power consumption, power savings can be made, and hence device battery life extended, by using a presence detector with a lower power consumption than the radar (hence the use of the expression “low power sensor”) to monitor the monitored area while having the radar sleep. The lower power consumption presence detector may use, for example, a thermal sensor (IR sensing) such as a PIR sensor or a TMOS sensor to detect presence in the monitored area. TMOS sensors are particularly preferred because they sense absolute temperature rather than differential temperature as PIR sensors do, and this means that TMOS sensors are capable of detecting stationary objects as well as moving ones. TMOS sensors can also detect human-sized objects at up to 6 metres distance without the need to use optics, and up to 12 metres if suitable optics are used. Suitable TMOS sensors are available from ST Microelectronics - for example the STHS34PF80 device. Upon receiving a “presence” signal from the low power sensor 306, the processor can wake the ranging arrangement to obtain a ranging signal which can be used to determine whether the presence is within the desired “target zone”. If the presence is, for example, too far away to be of interest, the processor doesn’t wake the image sensor, but if the presence is within the target zone the image sensor is activated and images of whoever or whatever is in the target zone are captured.
If the ranging arrangement is also configured to determine a size of a detected target, the image sensor may only be woken if the determined size is indicative of the presence of one or more people. Depending upon the arrangement and the system settings, the captured images may be transmitted to a controller of a security monitoring system or they may be transmitted, directly or indirectly, to a system back end for analysis or assessment, and/or transmitted to the householder or other designated person(s) for viewing on a device via, for example, an installed app.
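
The staged gating just described (thermal trigger, then ranging, then an optional size check before the camera is woken) can be sketched as follows. The target-zone depth and the "human-sized" band are hypothetical illustration values, not parameters taken from this description:

```python
# Sketch of the staged wake-up decision described above. The target-zone
# depth and the "human-sized" band are hypothetical illustration values.

TARGET_ZONE_MAX_M = 6.0          # assumed depth of the specific area of interest
HUMAN_SIZE_RANGE = (0.4, 2.2)    # assumed plausible human target size in metres

def should_wake_camera(thermal_presence, range_m, target_size_m=None):
    """True only if the thermal trigger fired, the ranged presence lies inside
    the target zone and, where size data exist, the target is human-sized."""
    if not thermal_presence:
        return False                  # ranging is never consulted without a trigger
    if range_m is None or range_m > TARGET_ZONE_MAX_M:
        return False                  # presence is outside the target zone
    if target_size_m is not None:
        lo, hi = HUMAN_SIZE_RANGE
        if not (lo <= target_size_m <= hi):
            return False              # ranged object is not human-sized
    return True

print(should_wake_camera(True, 3.5, 1.7))   # True: in-zone and human-sized
print(should_wake_camera(True, 10.0))       # False: beyond the target zone
```

Each stage only runs when the cheaper stage before it has fired, mirroring the power-saving rationale above.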

Defocusing energy received from ground level can help a PIR to distinguish between human and animal presence in that thermal energy is then received primarily from standing objects, rather than ground-level animals.

Image processing can use image recognition or pattern matching to detect human forms in the image.

From considering Figure 4 it will be appreciated that the image sensor may (and generally will) have a larger field of view than the low power sensor 306 and/or the ranging arrangement 308. Often we will not be interested in what the image sensor can see at a great distance, and we can use the ranging capabilities of the ranging arrangement 308 to help us ignore activity and/or presence at distances/locations outside of our specific area of interest. Radar and time of flight ranging can also help to overcome an overly sensitive low power (e.g. thermal) sensor whose range exceeds the maximum depth of the specific area of interest. The range of a radar/time of flight detector is at least in part dictated by the launch power of the pulses used for ranging, and in general we may be working with specific areas of interest whose depth (and breadth) is such that we don’t have to work at the ranging arrangement’s (e.g. radar’s) maximum possible launch power, and this is very helpful in terms of extending battery life (when that is a relevant consideration). This possibility to trim the range of the ranging arrangement (e.g. radar) by controlling power doesn’t really exist for the image sensor or low power sensor, and although their fields of view can be adjusted using optical masks, lenses and filters, these are much harder to manage in a practical setting where details of the adjustments required are likely to be unique to each particular site and mounting location.

Whereas a PIR sensor generally has a wide field of view, a Time-of-flight (TOF) sensor could optionally have a narrower field of view. This could go partway to overcoming the limitation that some TOF sensors only measure distance without direction. The field of view could be limited mechanically, by an (ultrasonic) baffle or lens or collimator, or it could be a characteristic of the transducer used in the TOF module. Reducing the field of view of the TOF sensor (where used) can go some way to reducing risk of false positives. If a PIR sensor is used it is likely to detect many motion detection events which are false positives for the device. A TOF sensor with a limited FOV can reduce the incidence of false positives, and reduce the number of times the camera needs to be powered up.

Figure 5 illustrates schematically the operation of the detection system of an image capture arrangement, such as that shown in Figure 3, in which ranging is performed by image analysis rather than by using a dedicated ranging arrangement such as a radar or time of flight detection system. Figure 5 also assumes that the low power sensor 306 is a thermal sensor in the form of a PIR detection arrangement (comprising for example one or two PIR detectors). Figure 5 begins at 500 with only the PIR detector(s) active and the processor 302, here an MCU (there may be multiple MCUs or other processors in the image capture apparatus), asleep and the image sensor 304 turned off. As long as the PIR 306 detects no motion 501, it provides no output signal to the processor 302, but if it detects motion 502 or is exposed to changes in light intensity (and/or vibration) 503, the PIR provides an output signal which, if it exceeds a predetermined threshold, is effective to wake 505 the processor 302. At 516 the MCU processes the PIR signal and at 510 determines whether the PIR signal could be motion from a human. If the processor 302 determines 512 that the PIR signal does not represent human motion, the processor goes back to sleep and the process returns to step 500. If the processor 302 determines 514 that the PIR signal does represent human motion, the processor wakes the image sensor (camera) 304 which starts to capture video or a sequence of still images, preferably at least 3 in number. At 516 the images (still or video) are supplied to an object detection process 520 which determines the nature of the object responsible for the detected motion. The object detection process will typically be performed using an AI approach, for example based on machine learning, optionally a suitably trained neural network.

If the object detection process determines that the captured images contain no human and no other object of interest, the camera 304 and the processor 302 go back to sleep, the captured images are discarded, and the process returns to step 500. But if the object detection process determines that the captured images contain a human 523 or another object of interest 524, the process advances to step 525 in which the captured images are subjected to computer vision processing to determine the range of the human or other object of interest. For example, this processing may be based on the location of the object in the image and the size (based on height, width, area, pixel count, etc.) of the object in the image (e.g. compared to known objects in the image), and again an AI-based approach is favoured as a means of delivering accurate results in a timely fashion. If the computer vision process 525 determines 526 that no object of interest is within the target area, the captured images are discarded, the processor and camera are powered down, and the process returns to step 500. If the computer vision process 525 determines that a human 528 or another object of interest 529 is within the target area, the captured images are processed 530 as necessary to obscure such background image portions as is required for GDPR (or equivalent) compliance. This may be achieved by masking the entire (or substantially the entire) background area, or it may involve using a face location approach and then processing the background image portions that contain faces to remove sufficient information for GDPR (or the like) compliance. Background masking is conveniently based on differential labelling of background areas and areas of interest (e.g. target area(s)) during an initial setup process.
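
One simple range cue of the kind mentioned above is the apparent size of the detected object in the image. Under a pinhole-camera model, and assuming a known real-world height (e.g. an average adult), range can be estimated as range = f × H / h. The focal length in pixels and the assumed person height below are illustrative values, not parameters from this description:

```python
# Pinhole-model range estimate from the pixel height of a detected person.
# The focal length and assumed real-world height are illustrative values.

def estimate_range_m(pixel_height, focal_px=1000.0, real_height_m=1.7):
    """range = f * H / h: the distance at which an object of height H (metres)
    subtends h pixels through a lens of focal length f (pixels)."""
    if pixel_height <= 0:
        raise ValueError("pixel height must be positive")
    return focal_px * real_height_m / pixel_height

print(round(estimate_range_m(340), 2))  # 5.0 m for a 340-pixel-tall detection
```

A learned model can refine this by combining several such cues (location in the frame, comparison with known objects), as the text suggests.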

The captured images, edited as necessary for GDPR (or equivalent) compliance or for any other purpose, are then supplied to the transceiver 310 by means of which they may be supplied to the customer/owner/occupier via, for example, transmission to a system back end - typically via the controller 208 of the security monitoring system, but optionally via another gateway (e.g. a Wi-Fi gateway) or via a PLMN - and thence to a user device 220. The images may be pushed directly to the customer device, or they may be held 533 in a backend store (e.g. in cloud storage) for later review by the customer. Additionally or alternatively, the captured images, edited as necessary, may, depending upon the arm state of the security monitoring system, be supplied to the remote monitoring centre 210 (also referred to as an alarm receiving centre, ARC). The images are passed via the RF transceiver 310 to the control unit 208 of the security monitoring system, which takes account 535 of the arm state of the system. The control unit 208 may, if the system is armed, either supply the images to the ARC 210 directly (as a pre-alarm notification) or may supply the images to an image store accessible to the ARC 210 from where ARC operatives can recover the stored images for review in connection with an incident or alarm event. Pre-alarm notification 536 is an awareness state used by the alarm (security monitoring) system for heightened sensitivity that some kind of intrusion event may be imminent, based on sensor signals that are themselves insufficient to trigger an alarm.

Pre-incident 537 is a pool of information (e.g. captured images and video) recorded (shortly) before the alarm event, that the surveillance team (in the ARC 210) can access should an alarm be triggered. It can provide additional useful information about what led up to the alarm.

After the generation and supply of the edited images at steps 532 and 534 the process ends, with the camera and processor being powered down, when the PIR ceases to report motion (502, 503), although the system may also make use of time outs as appropriate.

Figure 6 shows a detail (steps 500 to 506) of Figure 5 representing the initial wake-up detection subsystem. The objective of this subsystem is to optimize the threshold with respect to maximizing the number of wake-ups based on the presence of a human object of interest, while minimizing the number of wake-ups based on any motion outside the target area. In support of these objectives it is helpful to: provide temperature compensation to minimize unnecessary wake-ups during cold periods, e.g. a cold season (knowledge of the ambient temperature can be used to judge how great the expected temperature difference should be between a person and the ambient temperature, and in turn this can be used to adjust the PIR signal threshold before waking the MCU - during warm conditions the temperature difference will be small, because body temperature will be only slightly above ambient, and a low threshold is set, whereas during cold weather such a low threshold may result in false positives from detecting motion from objects that are only slightly above ambient and not at body temperature, so increasing the threshold in cold weather can reduce these false positives); provide suitable IR-filtering for the PIR to reduce the incidence of false wake-ups due to sunlight, and in particular due to the PIR “seeing” the sun directly; and configure the PIR to restrict the monitored (observed) area, for example by using masks and/or suitable lens configurations, to reduce false wake-ups due to motion outside the target area.
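
The ambient-temperature compensation described above amounts to raising the wake threshold as the expected person-to-ambient contrast grows. The skin temperature, base threshold and scaling constant in this sketch are illustrative assumptions, not values from this description:

```python
# Sketch of ambient-temperature compensation for the PIR wake threshold.
# The skin temperature, base threshold and scaling constant are assumptions.

SKIN_TEMP_C = 33.0     # approximate human skin surface temperature
BASE_THRESHOLD = 10.0  # assumed base threshold in arbitrary signal units
SCALE = 0.5            # assumed threshold units per degree of contrast

def pir_threshold(ambient_c):
    """Raise the wake threshold as the person/ambient contrast grows, so that
    cold weather does not cause false wake-ups from merely lukewarm objects."""
    contrast = max(0.0, SKIN_TEMP_C - ambient_c)
    return BASE_THRESHOLD + SCALE * contrast

print(pir_threshold(30.0) < pir_threshold(0.0))  # True: colder ambient, higher threshold
```

In warm conditions the contrast term vanishes and the low base threshold applies, exactly as the text describes.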

The PIR setup can be optimised to help achieve the last two goals: by choosing to use one or two PIRs, by appropriate selection of optics - e.g. using mirror(s) and/or Fresnel lens(es) - and by suitable masking and/or other aspects of the optics design, it is possible to configure the observed area appropriately.

Figure 7 shows a detail (steps 510 to 516) of Figure 5 representing the motion detection subsystem. Here the objective is to optimize the algorithm (which will typically be based on some kind of machine learning (ML) approach) applied by the processor so as to: maximize human motion detection; minimize false detections due to light/vibrations; minimize false detections due to motion from the environment (bushes, trees, wind, etc.); and minimize false detections due to motion from other sources (animals, etc.).

We also want this processing to provide decisions quickly, so that we don’t miss capturing important events with the camera. Typically we will use a real-time ML model, e.g. a decision tree algorithm.
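
A shallow decision tree over simple PIR-signal features is one way such a real-time classifier could look. This is a hand-rolled hypothetical sketch, not the actual model; the feature names (peak amplitude, event duration) and thresholds are invented for illustration:

```python
# Hypothetical hand-rolled decision tree over simple PIR-signal features
# (peak amplitude and event duration); all thresholds are illustrative only.

def classify_pir_event(peak_amplitude, duration_s):
    """Return 'human' for strong, sustained signals; otherwise a likely
    nuisance cause, mirroring the false-detection categories listed above."""
    if peak_amplitude < 5.0:
        return "light/vibration"      # weak transients
    if duration_s < 0.3:
        return "environment"          # brief flutter (bushes, wind, etc.)
    if peak_amplitude < 12.0:
        return "animal"               # moderate but sustained signal
    return "human"                    # strong, sustained motion

print(classify_pir_event(20.0, 1.5))  # human
print(classify_pir_event(8.0, 1.0))   # animal
```

A tree of this depth evaluates in a handful of comparisons, which is why decision trees suit the real-time constraint mentioned above.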

Figure 8 shows a detail (steps 515 to 524) of Figure 5 representing the human object detection subsystem. The objective for this subsystem is to optimize Image-ML with respect to maximizing true human object detection while minimizing false human object detection.

Figure 9 shows a detail (steps 523 to 529) of Figure 5 representing range estimation by computer vision.

Figure 10 corresponds to Figure 5 but represents the use of a different type of low power sensor 306. In this case the sensor, which is again a thermal sensor, is a TMOS sensor. The process starts at 1000 with the TMOS sensor active but with the processor and image sensor (camera) powered down. Thereafter, the process of Figure 10 mirrors that of Figure 5 with the role of the PIR replaced by the TMOS sensor.

Figure 11 corresponds generally to Figure 5 but here, instead of using image analysis to determine the range of objects, the image capture device 300 uses a dedicated ranging system, here in the form of a time of flight detection system. Once again, the low power sensor 306 is a PIR (which may include one or more PIR detectors). As before, if the PIR threshold is reached 1105 the processor (MCU) 302 is woken, but now the processor in turn wakes 1107 the ranging system 308 which is here a time of flight system that launches ranging pulses and determines range based on the time taken for the ranging pulses to return. The time of flight ranging system ranges to determine whether there is motion (or presence) in the target zone. If there isn’t 1108, the processor and the time of flight system go back to sleep and the system reverts to step 1100.

If there is determined to be motion within the target area, the time of flight data are provided by the time of flight system (which will include some kind of processor or MCU) to the processor at step 1109. Based on the signals from the PIR and the data from the time of flight system, the processor determines whether the detected motion could be from a human. If the processor determines that the motion could be from a human, and if the time of flight data show that there is motion within the target area, the processor wakes the camera 1115 which then starts to capture video or a sequence of images (at least 2, preferably at least 3). It will be appreciated that the decision node that links 1111 and 1114 may be deleted - because the processor 302 already has all the relevant information, so that this processing may already have been done. At 1116 the video/images are processed to determine what object was responsible for the detected motion (as with step 520 of Figure 5). If an object of interest is determined to be present, the process continues to step 1130 (which mirrors step 530 of Figure 5) with the processing of the images to obscure or eliminate either all out of bounds image features or only those that correspond to recognisable human features such as faces, or some combination of the two.
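The time-of-flight gating above works from the round-trip delay of the ranging pulse: range = v × t / 2, where v is the propagation speed (the speed of sound for an ultrasonic sensor, the speed of light for an optical one). A minimal in-zone check, with an assumed target-zone depth, might look like this:

```python
# Range from a time-of-flight echo: distance = speed * round_trip / 2.
# The target-zone depth is an assumed illustration value.

SPEED_OF_SOUND_M_S = 343.0   # in air at roughly 20 degrees C (ultrasonic sensor)
TARGET_ZONE_MAX_M = 5.0      # assumed depth of the target zone

def tof_range_m(round_trip_s, speed=SPEED_OF_SOUND_M_S):
    """Half the round-trip distance travelled by the ranging pulse."""
    return speed * round_trip_s / 2.0

def in_target_zone(round_trip_s):
    return tof_range_m(round_trip_s) <= TARGET_ZONE_MAX_M

print(round(tof_range_m(0.02), 2))  # 3.43 m for a 20 ms ultrasonic echo
print(in_target_zone(0.02))         # True
```

Because the check is a single multiply-and-compare, it can run on the TOF module's own MCU before the main processor is disturbed.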

Figure 12 corresponds closely with Figure 11 but here the low power sensor (primary sensor) is a TMOS sensor rather than a PIR, and hence the system and method have similarities with those of Figure 10 but using device-based ranging (i.e. using dedicated hardware) rather than computer vision based ranging.

Figure 13 shows the use of radar to provide ranging information that is used in determining whether to wake the camera. Generally the system and method are based on those of Figure 5, with the first sensor again being a PIR arrangement (of one or more PIR devices), but without, of course, the use of computer vision to generate ranging information. As before, if the PIR arrangement provides an above-threshold signal the processor wakes 1305. By analysing the signals provided by the PIR the processor is able to determine whether the detected motion could be that of a human. If it is, the processor wakes the radar at 1311, but if it isn’t the processor shuts down again leaving just the PIR arrangement active. If the radar is activated it emits ranging signals and analyses the timing and properties of the signals that come back to determine whether there is an object within the target zone, and optionally the size of any object(s) detected. The processor 302, based on the information derived from the PIR signals and from the radar, determines 1313 whether the detected motion could be human motion from within the target zone. If the determination is positive, the processor wakes the camera which then begins 1315 to capture video or a sequence of at least two (preferably at least three) images. If the determination is negative, the radar is turned off and the processor powers down, leaving just the PIR arrangement active. Once captured, the images are processed 1320 to determine whether an object of interest is within the target area - in effect determining what object was responsible for the detected motion. If no human or other objects of interest are detected within the target zone, the images are deleted and the camera, radar and processor are powered down, leaving just the PIR arrangement active.
If it is determined that an object of interest is present in the target zone, the captured images are if required processed as previously described (for GDPR - or the like compliance). The handling of the images emerging from step 1330 is as previously described.

Figure 14 corresponds to Figure 13 but with the use of a TMOS sensor in place of the PIR arrangement. In all other respects the system and method illustrated in Figure 14 corresponds to that of Figure 13.

With respect to the systems of Figures 13 and 14, the reason that we do not immediately wake the radar along with the MCU based on the first sensor detecting movement or presence, as we do with the systems shown in Figures 11 and 12 where range estimation is by time of flight, is that radar ranging systems tend to be much more power hungry than time of flight systems. Given that we may be dealing with a device that is relying on battery power - either because it is solely battery powered or because it is running on a backup battery power supply, the typical extra power consumption of radar ranging systems makes it preferable to have the MCU (more generally processor) determine whether there is a relevant presence, based on movement for example, before powering up the radar rather than starting the radar along with the MCU (processor). Of course if there is no battery power supply constraint, then the time of flight example could be implemented using radar instead of time of flight, rather than using the arrangements of Figures 13 and 14.

Figure 15 corresponds to Figure 5, differing only in that in this variant of the method the computer vision range estimation is no longer required to detect motion inside the target area. This relaxes the requirements for the processing algorithm - although of course the use of a PIR arrangement, and the early stages of processing, still require there to have been movement.

Figure 16 is a timeline diagram illustrating activation of the image sensor of an image capture arrangement, such as that shown in Figure 3. In a rest state the radar arrangement 308 and image sensor 304 are powered down (“sleeping”), but the low power sensor 306 is active. Upon sensing a presence in the monitored area the low power sensor provides 1600 a “presence” signal to the processor 302. If the signal from the low power sensor is greater than a threshold level, the processor is powered up and the processor then determines 1601 whether the signals from the low power sensor indicate human movement. If they don’t the processor powers down and the system returns to the rest state. But if the signals do indicate the existence of human movement, the processor 302 activates 1602 the radar arrangement 308. At 1604 the radar arrangement 308 transmits one or more ranging signals. At 1606 the radar arrangement 308 receives reflected ranging signals and processes these to determine the range (distance) to reflecting objects, and optionally the size of any objects. At 1608 the radar arrangement provides the results of ranging determinations to the processor 302, and will continue to do so until instructed by the processor to power down (or until the processor powers down the radar arrangement 308).

At 1610 the processor compares the received ranging signals with pre-stored ranges corresponding to the specific area of interest (e.g. 414 or 416 of Figure 4). If the ranging signals indicate presence only beyond the specific area of interest, the processor does not activate the image sensor, but either continues to monitor received ranging signals as these arrive or powers down the radar and itself to return to the rest state. It can be useful to receive a series of ranging signals that indicate an ever-decreasing range, as this may indicate that someone is approaching the target zone, and it can be useful to be alerted to any detection of loitering in the monitored area - and this is discussed later with reference to Figure 23. In the event that the processor does determine that a received ranging signal indicates presence within the specific area of interest, the processor at 1612 activates the image sensor 304 (or corresponding camera, if the image sensor is provided as part of a self-contained camera).
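
The "ever-decreasing range" cue mentioned above can be sketched as a check that successive ranging reports trend downwards, with a small tolerance to absorb measurement noise; the tolerance value is an assumption for illustration:

```python
# Sketch of approach detection from a sequence of ranging reports.
# The jitter tolerance is an assumed value to absorb measurement noise.

TOLERANCE_M = 0.2  # small range increases within this band are treated as noise

def is_approaching(ranges_m):
    """True if the ranges trend downwards overall, allowing small noisy rises."""
    if len(ranges_m) < 2:
        return False
    decreasing = all(b <= a + TOLERANCE_M for a, b in zip(ranges_m, ranges_m[1:]))
    return decreasing and ranges_m[-1] < ranges_m[0]

print(is_approaching([8.0, 6.5, 6.6, 5.0, 3.2]))  # True: overall trend is inward
print(is_approaching([3.0, 4.0, 5.5]))            # False: the target is receding
```

A similar windowed check over a longer period, with ranges that neither close nor recede, could serve as a simple loitering indicator.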

At step 1614 the image sensor or camera 304 starts to capture images and at 1616 these are supplied to the processor 302. At step 1618 the processor processes the images to determine what object caused the motion detection event. If the images do not contain any object of interest (human or otherwise), the images are discarded and the image sensor, radar and processor are powered down. If the images do contain an object of interest, the processor at step 1620 proceeds to segment, mask or obfuscate the images as necessary for GDPR (or the like) compliance. This may be done based on pre-stored masks corresponding to the boundary of the premises - so that, for example, parts of the image captured by the camera are labelled as corresponding to public property, e.g. the street, or to private property not belonging to the protected premises, e.g. a neighbouring property, and the image portions corresponding to the public or third party parts are masked or obfuscated before the images leave the image capture device. Alternatively, the processor may use a facial recognition algorithm to identify image portions that contain faces and then selectively mask/obfuscate or detail-suppress those image portions that contain faces that are determined not to be in the grounds of the protected premises. This may be done using the size of an image portion that has an associated face and/or based on a minimum face size corresponding to an adult “in range” (i.e. within the target area) face. Account may additionally or alternatively be taken of whether an image portion recognised as a person also includes one or more feet - since the presence of feet in an image portion is likely to indicate that the person is rather remote from the camera. It will be appreciated that these observations on image processing apply equally to systems in which ranging is performed by image processing rather than using dedicated ranging hardware.
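
The pre-stored-mask approach described above amounts to blanking (or blurring) the pixels labelled as public or third-party property during setup, before the images leave the device. A minimal sketch on a toy image represented as a nested list of grey values, where True in the mask marks out-of-bounds pixels:

```python
# Sketch of background masking for GDPR-style compliance: pixels flagged as
# public/third-party in a pre-stored mask are blanked before the image leaves
# the device. Images are toy 2-D lists of grey values; mask True = obscure.

def apply_privacy_mask(image, mask, fill=0):
    """Return a copy of the image with masked pixels replaced by a fill value."""
    return [
        [fill if masked else pixel for pixel, masked in zip(img_row, mask_row)]
        for img_row, mask_row in zip(image, mask)
    ]

image = [[10, 20, 30],
         [40, 50, 60]]
mask  = [[True, False, False],   # left column labelled as public street at setup
         [True, False, False]]
print(apply_privacy_mask(image, mask))  # [[0, 20, 30], [0, 50, 60]]
```

A real implementation would operate on full-resolution frames and might blur rather than zero the region, but the principle - a fixed, site-specific mask applied on-device - is the same.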

At step 1622 the processor supplies these images, pre-processed for GDPR (or the like) compliance, to the transceiver 310. At step 1624 the transceiver 310 transmits the supplied images to the control unit of the security monitoring system which may in turn supply these images to a remote monitoring station 210, if the security monitoring system is armed, and/or (optionally whether or not the system is armed) to a system back end or cloud storage for access by the owner/occupier of the protected premises. The processor 302 may determine to continue to power the image sensor/camera even after the radar arrangement ceases to provide signals indicating presence within the specific area of interest of the camera/image sensor - for example, in the event of the security monitoring system having been triggered by a break-in (e.g. detected by a door/window sensor, and/or internal presence/motion sensors having been triggered in an armed away mode) the processor 302 may supply images from the image sensor/camera to a remote monitoring station (and/or push them to a user device 220) and may continue to want images captured by the camera in order to provide evidence for insurance/police investigations, such as for example to capture images of any vehicles used by any intruder, evidence of stolen goods being loaded onto a vehicle, and information about whether the intruders turned left or right on leaving the grounds. The video images may be streamed by the processor/transceiver until, for example, the remote monitoring station instructs the processor to cease the transmission of images. The control unit 208 of the security monitoring system may be configured to store images/video captured by cameras/image sensors of the security monitoring system, and images/video may be stored on the control unit instead of or in addition to these being transmitted to an off-site store and/or remote monitoring station.
It is however desirable for captured images/video to be transmitted to a remote receiver, for storage and/or analysis, to guard against the destruction/removal of the control unit 208. As and when necessary, the processor 302 of the image capture arrangement 300 instructs the transceiver 310, the image sensor 304, and the radar arrangement 308 to power down or directly powers these down at steps 1626, 1628, and 1630, respectively.

Figure 17 corresponds to Figure 16 but illustrates the use of a time of flight ranging system rather than a radar system. In a rest state the ranging arrangement 308 and the image sensor 304 are powered down (“sleeping”), but the low power sensor 306 is active. Upon sensing a presence in the monitored area the low power sensor provides 1700 a “presence” signal to the processor 302. If the signal from the low power sensor is greater than a threshold level, the processor 302 is powered up and the processor may also power up 1702 the time of flight system. The time of flight system then starts its ranging operation at 1704. Data from the ranging operation are then provided to the processor at 1706.

The processor then determines 1708 whether the signals from the low power sensor and the time of flight system indicate human movement within the target area. If they do not, the processor and the time of flight system are powered down and the system returns to the rest state. But if the signals do indicate the existence of human movement, the processor 302 activates 1710 the image sensor 304 (or the corresponding camera, if the image sensor is provided as part of a self-contained camera).
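The staged power-up logic described above (thermal sensor wakes the processor, which wakes the ranging system, which in turn gates the camera) might be sketched as follows. This is a minimal illustration; the function name and the threshold value are assumptions, not taken from the described arrangement:

```python
# Illustrative sketch of the staged wake-up decision: the low power thermal
# sensor gates the ranging system, and the ranging result gates the camera.
# THERMAL_THRESHOLD is an assumed normalised trigger level.

THERMAL_THRESHOLD = 0.5

def wake_decision(thermal_level: float, in_target_zone: bool) -> str:
    """Return the stage the arrangement should reach: 'rest' (everything
    powered down), 'ranging' (ranging system woken, then powered down again
    because nothing is in range), or 'camera' (image sensor activated)."""
    if thermal_level <= THERMAL_THRESHOLD:
        return "rest"      # no presence sensed: remain in the rest state
    if not in_target_zone:
        return "ranging"   # presence sensed, but outside the target zone
    return "camera"        # human movement confirmed within the target zone
```

In a real device the two inputs would come from the thermal sensor 306 and the ranging arrangement 308 respectively.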

At step 1712 the image sensor or camera 304 starts to capture images and at 1714 these are supplied to the processor 302. At step 1716 the processor processes the image to determine what object caused the motion detection event. If the images do not contain any object of interest (human or otherwise), the images are discarded and the image sensor, the time of flight system and the processor are powered down. If the images do contain an object of interest, the processor at step 1718 proceeds to segment, mask or obfuscate the images as necessary for GDPR (or the like) compliance. This may be done as previously described.

At step 1720 the processor supplies these images, pre-processed for GDPR (or the like) compliance, to the transceiver 310. At step 1722 the transceiver 310 transmits the supplied images to the control unit of the security monitoring system, which may in turn supply these images to a remote monitoring station 210, if the security monitoring system is armed, and/or to a system back end or cloud storage for access by the owner/occupier of the protected premises (optionally whether or not the system is armed). The processor 302 may determine to continue to power the image sensor/camera even after the ranging arrangement ceases to provide signals indicating presence within the specific area of interest of the camera/image sensor - for example, in the event of the security monitoring system having been triggered by a break-in (e.g. detected by a door/window sensor, and/or by internal presence/motion sensors having been triggered in an armed away mode) the processor 302 may supply images from the image sensor/camera to a remote monitoring station (and/or push them to a user device 220), and may want to continue capturing images with the camera in order to provide evidence for insurance/police investigations - for example, to capture images of any vehicles used by any intruder, evidence of stolen goods being loaded onto a vehicle, and information about whether the intruders turned left or right on leaving the grounds. The video images may be streamed by the processor/transceiver until, for example, the remote monitoring station instructs the processor to cease the transmission of images. The control unit 208 of the security monitoring system may be configured to store images/video captured by cameras/image sensors of the security monitoring system, and images/video may be stored on the control unit instead of, or in addition to, being transmitted to an off-site store and/or remote monitoring station.
It is however desirable for captured images/video to be transmitted to a remote receiver, for storage and/or analysis, to guard against the destruction/removal of the control unit 208.

As and when necessary, the processor 302 of the image capture arrangement 300 instructs the transceiver 310, the image sensor 304, and the ranging arrangement 308 to power down or directly powers these down at steps 1724, 1726, and 1728, respectively.

It will be appreciated that the behaviour of an image capture arrangement according to aspects of the invention may depend upon an armed state of the security monitoring system - the status (e.g. “armed” or “disarmed”, with the option to discriminate between an “armed at home” and an “armed away” state, the former merely providing a secured perimeter - and typically used when the premises are occupied and the occupants want the comfort of a monitored perimeter, and the latter being used when the premises are unoccupied so that the detection of presence within the premises constitutes an alarm event) of the security monitoring system may be provided to an image capture arrangement so that the processor 302 of the image capture arrangement can adjust its behaviour according to the armed state. The processor of the security monitoring system may be configured to transmit the system status to image capture arrangements associated with the system, i.e. to security cameras of the system and to any video doorbells of the protected premises. The report of system status may be either “armed” or “disarmed” with no distinction being made between an “armed away” state and an “armed at home” state, although in alternative implementations the reports of system status may specify whether an armed state is “armed at home” or “armed away”. 
The behaviour of an image capture arrangement according to aspects of the invention may additionally depend upon whether the image capture arrangement is a video doorbell or a security camera for the security monitoring system (an alarm system) - in the latter case the processor of the image capture arrangement may be configured not to activate the radar arrangement or the image processor/camera, when the security monitoring system is disarmed, based solely on the low power sensor detecting presence - although the image processor/camera may be activated, and optionally also the radar arrangement, upon instruction from a remote monitoring centre 210 or from a user via a user device 220. Whereas, if the image capture arrangement is a video doorbell its processor 302 may be configured to activate the radar arrangement, and based on the ranging signals received also activate the image sensor/camera, irrespective of the armed state of the security monitoring system.

The processor of the image capture arrangement may be trained so as to acquire the specific area of interest of each of the video cameras on their installation as part of a training operation in which an installer (or householder) works with an assistant to position a person at locations beyond the specific area of interest, at the limit of the specific area of interest, and preferably also (just) within the specific area of interest.

At least when the image capture device is a video doorbell, its processor 302 may be configured to take account of the length of time that a presence is determined to be within the specific area of interest. For example, the controller may be configured not to act on a radar ranging signal confirming presence within the specific area of interest, and optionally within a near field zone of the specific area of interest, until the in-range presence has persisted for a predetermined period. A suitable minimum of the predetermined period may be at least 1 second, optionally at least 2 seconds, optionally at least 3 seconds, optionally at least 4 seconds, optionally at least 5 seconds, optionally at least 6 seconds, optionally at least 7 seconds, optionally at least 8 seconds, optionally at least 9 seconds, optionally 2 to 10 seconds. Adopting such an approach may help to avoid triggering the camera as a result of someone merely walking across the specific area of interest, and thus also reduce the problem of inadvertent captures that may be subject to restrictions under the GDPR or the like.
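The dwell-time gate described above might be sketched as follows; the class name, sampling interface, and default period are assumptions for this illustration only:

```python
# Illustrative sketch of the persistence gate: act on an in-range presence
# only once it has persisted for a predetermined period. PERSISTENCE_S uses
# one of the example values from the text (2 to 10 seconds).

PERSISTENCE_S = 2.0

class DwellGate:
    def __init__(self, required_s: float = PERSISTENCE_S):
        self.required_s = required_s
        self.entered_at = None  # time of first in-range sample, or None

    def update(self, in_range: bool, now_s: float) -> bool:
        """Feed one ranging sample; return True when the camera may trigger."""
        if not in_range:
            self.entered_at = None   # presence left the zone: reset the timer
            return False
        if self.entered_at is None:
            self.entered_at = now_s  # first in-range sample starts the timer
        return (now_s - self.entered_at) >= self.required_s
```

Someone walking straight across the specific area of interest produces only short in-range runs, so the gate never fires and the camera is not triggered.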

In image capture arrangements according to aspects of the invention, one option would be to perform detection only when the alarm system is armed. This would save battery power for the periods when there is a real interest in detecting suspicious activity around the device.

Another option would be to perform detection also when the system is disarmed. This can allow the device to accumulate "review the day" images that the user may be interested to see later, to review what happened while she was away. The device may be surveying a substantial part of the user's own property, in which case images may freely be captured at any time, without requiring suspicious activity to justify the capture.

It will be appreciated that the radar arrangement 308 may be configured not only to provide range information in terms of a simple distance between a detected object and the radar 308, but it may also be able to provide information about lateral offset from an axis of view - making it possible to discriminate between different lateral positions at a certain distance from the radar and hence enabling the specific area of interest of an image capture device to be shaped or configured to exclude portions where presence is not of interest. For example, considering the arrangement shown in Figures 1 and 4, we might define the specific area of interest as being (largely) confined to the path 108, because it may be decided that visitors who are going to ring the doorbell will always approach the doorbell 106 using the path 108. We may similarly decide to laterally constrain the specific area of interest of any image capture device according to the invention, taking advantage of the ability of the radar arrangement to distinguish between lateral offsets in order to exclude areas that we do not want to monitor - for example to exclude areas for reasons of data privacy (e.g. for compliance with GDPR or the like).
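The laterally constrained zone described above might be sketched as a simple range-plus-azimuth test. The corridor dimensions below are assumed values chosen purely for illustration, not taken from the described arrangement:

```python
import math

# Illustrative sketch: using range plus lateral offset (azimuth) from the
# radar to confine the specific area of interest to a corridor such as the
# path 108. MAX_RANGE_M and CORRIDOR_HALF_WIDTH_M are assumed values.

MAX_RANGE_M = 5.0            # assumed depth of the target zone
CORRIDOR_HALF_WIDTH_M = 1.0  # assumed half-width of the path corridor

def in_area_of_interest(range_m: float, azimuth_deg: float) -> bool:
    """True when a detection falls inside the laterally constrained zone."""
    lateral_m = range_m * math.sin(math.radians(azimuth_deg))
    forward_m = range_m * math.cos(math.radians(azimuth_deg))
    return 0.0 < forward_m <= MAX_RANGE_M and abs(lateral_m) <= CORRIDOR_HALF_WIDTH_M
```

A detection at the right distance but well off to the side (for example on a neighbouring property) is thereby excluded from the specific area of interest.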

It will be appreciated that one of the constraints under which image capture arrangements now operate, in most parts of the world, is a restriction based on privacy of personal data including for example image data - exemplified by the European Union’s General Data Protection Regulation (GDPR). This constraint manifests itself as restrictions on the capture and handling of images of persons. This represents a challenge for, for example, video doorbell installations that can capture images of a street or other outdoor public space, other private properties, or communal areas of multi-occupancy buildings such as condominiums, and likewise for security cameras of alarm installations that have the same kinds of outlook. Embodiments of the present invention provide a means to address this constraint by providing an image capture arrangement in which the processor is further configured to provide information received from the radar arrangement as an input to an image processing operation performed on images captured by the image sensor. The image processing arrangement may be used to process the images based at least in part on ranging information from the radar arrangement to facilitate processing of captured images for compliance with the GDPR or the like. For example, the image processing operation may involve obscuring image portions that include human features of any human determined to be more than a threshold distance from the image capture arrangement. The processor 302 of the image capture arrangement, which may be an MCU and may contain one or a plurality of processors, may be configured to perform the image processing operation.

Figure 18 illustrates schematically an example of segmenting the image provided by an image capture device according to aspects of the invention. In the example shown we see typical images provided by a video doorbell installation according to an aspect of the invention. In Figure 18A we see the doorbell camera's view of the grounds of the protected premises, which includes a path 1810, flanked on either side by a section of lawn 1812. The boundary of the property is marked by the transverse dashed line, beyond which can be seen a view of a street 1814 with a pavement 1816, on the camera's side of the street, abutting the boundary of the property. Also visible is another pavement 1818 on the opposite side of the street, beyond which houses 1820 are visible. Also visible in this Figure is a pedestrian 1822 walking along the street on the nearside pavement 1816, just beyond the property boundary. If the video doorbell installation is to be GDPR compliant, we need to take care not to supply images from the camera that include sufficient information to enable passers-by such as the pedestrian 1822 to be identified. We can potentially achieve this goal by segmenting the view from the camera so that the part of the image that represents geography that is "out of bounds" (which can be considered to be the background) is identified or labelled, so that the relevant pixels can be processed to obfuscate information that we want to suppress. The pixels corresponding to the "out of bounds" geography or background are enclosed in the rectangle of dashed lines.

Figure 18B corresponds to Figure 18A but shows the presence of a person, a visitor, 1824 adjacent the property. The proximity of the visitor can be established using dedicated ranging hardware and/or by image processing - it will be seen that the image portion containing pixels representing the visitor is much larger than the portion representing the passer-by 1822. But it can also be seen that the passer-by 1822 is still visible in Figure 18B. If we simply obfuscate the out of bounds rectangle, we will not be able to identify the visitor, so we must adopt a different approach. We can for example use a recognition algorithm to identify those image portions corresponding to anyone located within the grounds of the protected premises, and then protect the pixel values of the relevant pixels while obfuscating the remaining pixels of the background portion. We can of course also use the person-identifying algorithm to identify any people in the background of the image - the out of bounds parts of the image - and then selectively obfuscate only the pixels corresponding to those background people, rather than obfuscating the whole of the background.

An alternative approach would be to use "bounding boxes" defined about any visitor(s): we can then obfuscate the remainder of the image, or obfuscate only the background, or only that part of the background lying outside the bounding box(es) of any visitors. This approach will shortly be described with reference to Figures 21 and 22.

Figure 18C shows a typical image where a visitor is present, proximate the protected premises, but there are no other people in sight. In such a situation, if the processor can determine, for example using a face recognition algorithm, that there are no people in the background, then the processor can determine not to obfuscate the background - as reducing the amount of image processing done will typically reduce the amount of energy consumed, which is helpful particularly if the image capture device is running on battery power.

Figure 19 shows a potential view from the video doorbell 106 of Figures 1 to 4, showing pedestrians 600 on the pavement 406 of the street 404 immediately outside the premises, and also shows that pedestrians 602 on the pavement 408 on the opposite side of the street 404 are also visible. Also shown in the figure is a bus or coach 604 within which it might on occasion be possible to see the faces of the driver and passengers. Fortunately, while in the figure the pedestrians 600 and 602 and the coach 604 are within the field of view of the camera of the video doorbell, that is they are within the monitored area, they are all outside the target zone (specific area of interest) of the monitored area, and therefore even if the presence of the pedestrians 600 at the gateway is detected by the thermal sensor, ranging information from the radar arrangement will show that they are out of range and hence the processor 302 will not activate the camera of the video doorbell.

Figure 20 corresponds to Figure 19 but shows the presence of a mother and child 700 on the path, adjacent the drive, within the grounds of the house. The path is within the target zone, so that not only will the mother and child 700 be detected by the low power sensor as they walk up the drive and onto the path, but also ranging information from the radar will show that they are within range, that is within the target zone of the monitored area. The processor 302 of the video doorbell may therefore activate the camera of the device. Images captured by the camera of the video doorbell may now give rise to a GDPR problem - in that the captured images not only show the faces of the mother and child 700, who may be visiting the premises 100 (which should not be a problem under the GDPR), but also show the faces of the pedestrians 600 and 602 (and potentially of passengers on the coach 604), and that is a problem. But given that we have ranging information to accompany the captured image(s), we can use that ranging information to identify the people 700 who are within the target zone. Having identified all the relevant people, here the mother and child 700, we can identify all the other faces in the image and, determining that they are all out of range (since they are not within range according to the ranging information), process the portions of the image that correspond to those other faces to render them illegible - for example by setting the chrominance and luminance values of all the relevant pixels each to a standard value, optionally enlarging the area of each captured face so that the regions of normalised chrominance and luminance values expand by a "halo" amount to conceal details such as hair and face shape.
Other processing operations could of course be performed to remove any pictorial details that could help identify the persons whose likenesses were captured - the precise method of obliterating the features of the people whose images were captured is not critical provided that it is not subsequently possible to restore the lost information from the information left behind. The recognition of image portions as containing human faces can be performed by one of the many readily available algorithms for facial recognition.
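The "set pixels to a standard value, expanded by a halo" operation described above might be sketched as follows, on a toy greyscale image represented as a list of rows. The halo size, standard value, and function name are assumptions for this sketch:

```python
# Illustrative sketch: overwrite a face region, grown by a "halo" of pixels on
# every side, with a uniform value so the underlying detail cannot be
# recovered. STANDARD_VALUE and HALO_PX are assumed values.

STANDARD_VALUE = 128
HALO_PX = 1

def mask_face(image, box, halo=HALO_PX, value=STANDARD_VALUE):
    """Overwrite the face bounding box (x0, y0, x1, y1), expanded by `halo`
    pixels on every side, with a single uniform value (in place)."""
    x0, y0, x1, y1 = box
    h, w = len(image), len(image[0])
    for y in range(max(0, y0 - halo), min(h, y1 + halo + 1)):
        for x in range(max(0, x0 - halo), min(w, x1 + halo + 1)):
            image[y][x] = value
    return image
```

Because every pixel in the expanded region takes the same value, the original information is destroyed rather than merely hidden, in keeping with the requirement that the lost information cannot subsequently be restored.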

Depending upon the configuration of the video doorbell, the mother and child 700 in Figure 20 may be determined to be outside a defined near field, and hence the processor 302 of the video doorbell may not yet consider it appropriate to activate the image sensor/camera. But in Figure 21 we see the mother and child much closer to the house 100, and at this stage the processor may determine that they are within the defined near field zone and hence activate the camera of the video doorbell. Here the image processing task is similar to that described with reference to Figure 20, although here care needs to be taken, when obliterating the features of the face of the balloon-holding child (one of the people 602) at the gateway, not also to obliterate part of the features of the mother in the visitor group 700. The processor responsible for processing the images may recognise the existence of this problem by determining that some of the pixel coordinates of the mother's face in the image are very similar to some of the pixel coordinates of at least one of the other faces in the image. A possible solution to the problem may be to identify the image portions corresponding to the in-range people 700 and then to define a bounding box around the relevant image portion(s), as shown as 900 in Figure 22. The processor processing the image can then retain the image detail within the bounding box and de-detail the image portion(s) not contained within a bounding box - as before, there are many ways of de-detailing, but generally these involve removing relevant pixel detail, for example by unifying the pixel values, e.g. by "greying out".
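The bounding-box approach just described might be sketched as follows, again on a toy greyscale image. The function name and the grey value are assumptions for this illustration:

```python
# Illustrative sketch of the bounding-box approach: retain detail inside any
# visitor bounding box, and "grey out" (unify) every other pixel. GREY is an
# assumed value.

GREY = 128

def de_detail_outside(image, boxes, grey=GREY):
    """Return a copy of `image` with every pixel not inside any
    (x0, y0, x1, y1) box replaced by a uniform grey value."""
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, px in enumerate(row):
            inside = any(x0 <= x <= x1 and y0 <= y <= y1
                         for x0, y0, x1, y1 in boxes)
            new_row.append(px if inside else grey)
        out.append(new_row)
    return out
```

This avoids the overlapping-faces problem described above: any face outside the bounding box(es) is de-detailed wholesale, while the visitor's features inside the box are preserved.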

Figure 22 also provides a basis for describing the application of another aspect of the invention, which is the application of different types and/or intensities of obscuration to different portions of a single image (or to each of a sequence of images). The image shown in Figure 22 includes quite an extensive portion (largely) corresponding to the private grounds of the property - being that portion below the dashed line A-A - and another significant portion (above the dashed line) that contains most of the out of bounds (in this case public) land that is observed by the image capture arrangement. It will be noted that the dashed line A-A has been positioned just above the bounding box 900 that has been defined around the visitors 700. Although the positioning of the line A-A means that the upper parts of the boundary wall of the property are above the line, this is unlikely to be a problem because all of the ground level of the property to the front of the house is included in what can be treated as the private part of the image.

The occupier/owner of the property may well be interested in monitoring the image part that corresponds to the grounds of the property - that is, the part of the image below line A-A - but still be more interested in seeing who is visiting. It may therefore be beneficial to substantially obfuscate the image area above the line A-A, possibly by completely blanking it out, and merely slightly dim (e.g. 10 to 20%) or blur (e.g. 10 to 20%, e.g. applying a bokeh-like effect) the area beneath the line A-A that is outside the bounding box 900 - thereby emphasising the region within the bounding box 900. Moreover, for GDPR reasons it would be reasonable to make the obfuscation of the upper part of the image, above line A-A, non-reversible, but it could be helpful to be able to reverse the slight obfuscation of the obscured region beneath the line A-A. It will be appreciated that the line A-A could be defined lower down the image, so that its position coincides with the line of the boundary of the private property (that is, in line with the visible bases of the two boundary walls), thereby excluding all of the public space that begins with the pavement just outside the property. When there is no-one on the property to the front of the premises this would mean that the image portion corresponding to the public part could be obscured in some way, while the image portion corresponding to the private part remains unobscured so that a user can reassure themselves that there are no intruders in this part. When a visitor appears, the use of a bounding box approach in which image parts within the bounding box(es) are not (or largely not) obscured would enable the features of any visitor to be seen while largely maintaining the privacy of people within the public space beyond the private grounds.
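The tiered obscuration just described (blanking above the line A-A, slight dimming below it outside any visitor bounding box) might be sketched as follows, on a toy greyscale image. The function name and the dimming factor are assumptions for this illustration:

```python
# Illustrative sketch of tiered obscuration: rows above line_y are blanked
# (non-reversible), while rows below are slightly dimmed unless they fall
# inside a visitor bounding box. The ~15% dim factor is an assumed value
# within the 10 to 20% range mentioned in the text.

def tiered_obscure(image, line_y, boxes, dim=0.85):
    """Blank rows above line_y; keep pixels inside any (x0, y0, x1, y1) box;
    slightly dim everything else."""
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, px in enumerate(row):
            if y < line_y:
                new_row.append(0)  # blanked: original detail destroyed
            elif any(x0 <= x <= x1 and y0 <= y <= y1
                     for x0, y0, x1, y1 in boxes):
                new_row.append(px)  # visitor detail retained
            else:
                new_row.append(round(px * dim))  # slight dimming (~15%)
        new_row and out.append(new_row)
    return out
```

The blanking is irreversible, whereas a recorded dim factor would in principle allow the slight dimming to be (approximately) reversed, matching the reversibility distinction drawn above.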

Blurring can be a useful and attractive technique as it gives a viewer some spatial awareness in the image but enables the removal of personally identifiable information. Although we can always simply "black out" the area surrounding the person that we want to show on screen, doing this reduces the sense of where the person is (or of whether someone is hiding next to the door to break in, etc.).

Alternative approaches to obfuscating image portions or areas of an image include: blurring (potentially reversible), an example of which is a bokeh effect; "painting over" (irreversible); and watermarking (irreversible) or semi-transparent watermarking. It can be useful to use a linear algorithm to increase complexity. Example algorithms are box blur and Gaussian blur.

Other algorithms, such as "mixing and mirroring elements" / "twirls" / "waves" (distorting the image), are likely irreversible. As previously mentioned, under different circumstances, and particularly depending upon the reason for obscuring image portions, we may variously be interested in obfuscation mechanisms that are substantially non-reversible, non-recoverable, or reversible. It will be appreciated too that it may be useful to apply different obfuscation techniques to different portions of an image. For example, we may choose to use what is in effect a broad brush approach (which imposes a low processing burden but is not particularly accurate) in areas where there is no risk of overlapping with an image portion that we want to leave substantially unobscured, such as the face of a visitor, e.g. background portions that are remote from the face of the visitor, but use a different and more precise approach (despite a potentially increased processing burden) on background areas adjacent a visitor's face.
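Of the example algorithms mentioned above, a box blur is the simplest: each pixel is replaced by the mean of its neighbourhood. A one-dimensional sketch follows (a 2D box blur applies the same averaging along rows and then columns); the function name and radius are illustrative:

```python
# Illustrative 1D box blur: each value becomes the mean of the values within
# `radius` positions of it (clipped at the edges).

def box_blur_1d(values, radius=1):
    """Return a blurred copy of `values`."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

A Gaussian blur works the same way but weights the neighbourhood by a Gaussian kernel rather than averaging uniformly, giving a smoother result.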

Range information may be determined at least partly from processing the image. For example, range may be calculated by measuring the position of a person's feet within the image. A person's feet are features that may be recognised by an image processing algorithm. The position of a person's feet can provide an indication of the distance between the camera and the person, independently of the height of the person.
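The feet-based range estimate described above might be sketched with simple pinhole geometry, assuming a level (untilted) camera. The mounting height, focal length, and principal point below are assumed installation parameters, not values from this description:

```python
# Illustrative sketch: estimate ground distance to a person from the image
# row of their feet, using pinhole geometry for a level camera.
# CAMERA_HEIGHT_M, FOCAL_PX, and CY_PX are assumed installation parameters.

CAMERA_HEIGHT_M = 1.2  # assumed doorbell mounting height (metres)
FOCAL_PX = 500.0       # assumed focal length (pixels)
CY_PX = 240.0          # assumed principal point row (image centre)

def range_from_feet_row(feet_row_px: float) -> float:
    """Ground distance d = h * f / (row - cy) for feet `feet_row_px` rows
    down the image; feet at or above the horizon count as out of range."""
    dy = feet_row_px - CY_PX
    if dy <= 0:
        return float("inf")  # at or above the horizon: effectively out of range
    return CAMERA_HEIGHT_M * FOCAL_PX / dy
```

Because the estimate uses only the feet position on the ground plane, it is independent of the person's height, as noted above.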

The image sensing arrangement and/or the processing arrangement may be operated in a low-power mode for performing certain operations, including certain image generating and image processing operations. For example, in a low power mode, the camera may be configured to generate a monochromatic image, or a single colour channel image, e.g. R, or G or B. Additionally or alternatively, in a low power arrangement, the image processing arrangement may be configured to process monochromatic image information, or single colour channel information, e.g. R, or G or B. Such a technique may provide simplified processing compared to processing full-colour image information, and facilitate lower power consumption.
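Single-channel processing as described above might be sketched as follows, on a toy RGB image represented as rows of (R, G, B) tuples; the function name and default channel choice are assumptions:

```python
# Illustrative sketch: extract a single colour channel from an RGB image so
# that downstream processing handles one value per pixel instead of three.

def single_channel(image_rgb, channel=1):
    """Return a single-channel image; channel 0/1/2 selects R/G/B.
    The green channel (1) is often used as a rough proxy for luminance."""
    return [[px[channel] for px in row] for row in image_rgb]
```

Processing a third of the data per pixel reduces both the arithmetic and the memory traffic, which is the source of the power saving mentioned above.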

Figure 23 illustrates schematically different behaviours which may be detected by a thermal/low power detector and possibly also by a ranging arrangement such as a radar or time of flight system. In images A and B, 231 represents the outline (in plan) of the image field as observed by the low power/thermal sensor 306 (and possibly also as observed by any hardware ranging solution), which in this case is assumed to be part of an image capture arrangement in the form of a video doorbell that is mounted adjacent a door 232. 233, 235, and 237 are positions from which a pedestrian enters the field of view, and 234, 236, and 238 represent positions to which the pedestrian transitions.

In image A, at i) a person walks across the image field without stopping, that is they enter and exit the field, moving at a steady pace from position 233 to 234. Such behaviour in a pedestrian walking across a public space, such as the pavements 406, 1816, that is observed by the image capture arrangement, is unsuspicious and in general we are not interested in capturing such behaviour. But in the second case, ii), the person enters the image field from position 235, walks straight across the image field and then stops within it, remaining in the image field at position 236 for a prolonged period (several seconds at least). Such behaviour is suspicious (potentially that of an opportunist thief looking for things to steal, or of a more professional thief "casing the joint"), and consequently even though the person is in a public space we may still want to capture such events.

Image B i) is the same as image A i) but here we are also considering the trajectory of the person as they walk across the field of view. In this case the pedestrian walks across the field of view but does not approach the image capture device. This again is unthreatening and unsuspicious behaviour which we are generally not interested in capturing. Conversely, in B ii) the person walks from position 237 to position 238, approaching both the door 232 and the image capture device, coming to a stop immediately in front of the door 232. This latter pattern of behaviour is something that we want to capture. Indeed, in general we are interested to capture images (or video) of people who approach (in the sense of sidling, walking, or running towards) the image capture device (whether a video doorbell or a security camera of a security monitoring system), and of course we can use our ranging information - whether provided by a hardware ranging system (such as radar or a time of flight system) or derived from image analysis - to determine whether a person is approaching the image capture device. Thus, the processing arrangement or processor may be further configured (e.g. programmed) to activate the image sensor only if information received from the range-detecting sensing arrangement indicates a detected presence within a target zone within the monitored area having a trajectory corresponding to one or more predetermined criteria. Different trajectories may be classified into threatening and non-threatening classes based, for example, on whether a particular trajectory would bring the actor to the image capture arrangement. So, for example, a trajectory along a (curved) path that arcs towards the image capture arrangement but which is generally transverse to a normal to the plane of the image sensor of the image capture arrangement is non-threatening, whereas those that lead to the image capture arrangement are likely to be threatening.
Machine learning, a trained neural network, or the like may be used to classify and hence distinguish between trajectories that are threatening and those that are not.
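A very simple rule-based classifier in the spirit of the trajectory classification described above might be sketched as follows. The thresholds and function name are assumptions; as noted, a trained neural network could perform the same classification from richer track data:

```python
# Illustrative sketch: classify a track of successive range samples as
# "threatening" when its net motion is towards the device and it ends close
# to the device. APPROACH_M and NEAR_M are assumed thresholds.

APPROACH_M = 1.0  # assumed minimum net decrease in range (metres)
NEAR_M = 2.0      # assumed "arrived at the device" distance (metres)

def classify_trajectory(ranges_m):
    """ranges_m: successive range samples (metres) for one tracked person."""
    if len(ranges_m) < 2:
        return "non-threatening"  # too little data to infer an approach
    approached = ranges_m[0] - ranges_m[-1] >= APPROACH_M
    ends_near = ranges_m[-1] <= NEAR_M
    return "threatening" if approached and ends_near else "non-threatening"
```

A pedestrian crossing the field at roughly constant range (as in image B i)) is thereby classified as non-threatening, while an approach ending at the door (as in B ii)) is classified as threatening.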

Image C represents the addition of zones to the plan of the image field as observed by the low power/thermal sensor 306 (and possibly also as observed by any hardware ranging solution) of an image capture arrangement 300 according to any aspect of the invention. Each of the different zones A-E may have its own criteria for determining whether any detected behaviour (e.g. movement, trajectory, or presence for more than a certain time), such as outlined with respect to images A and B, should trigger image capture, although different zones may also share the same criteria.
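Per-zone criteria of the kind described above might be represented as a simple lookup, as in the following sketch. All criteria values and names are assumptions for illustration:

```python
# Illustrative sketch: each zone carries its own trigger criteria (here a
# minimum dwell time and whether an approach alone triggers capture).
# All values are assumed for illustration.

ZONE_CRITERIA = {
    "A": {"min_dwell_s": 5.0, "capture_on_approach": False},
    "B": {"min_dwell_s": 3.0, "capture_on_approach": True},
    "C": {"min_dwell_s": 2.0, "capture_on_approach": True},
    "D": {"min_dwell_s": 0.0, "capture_on_approach": True},
    "E": {"min_dwell_s": 0.0, "capture_on_approach": True},
}

def zone_triggers(zone: str, dwell_s: float, approaching: bool) -> bool:
    """Apply the zone's own criteria to one observed behaviour."""
    c = ZONE_CRITERIA[zone]
    return dwell_s >= c["min_dwell_s"] or (approaching and c["capture_on_approach"])
```

Zones covering public space (such as "A" here) can thus demand a long loiter before capture, while zones immediately in front of the door trigger on any presence or approach.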

Throughout the specification and claims, the expression “configured to” may be taken to mean “programmed” (e.g. in the case of a processor, processing arrangement, or programmable device), or “arranged”, or “programmed and arranged”, rather than merely implying “configurable to”.

The following numbered paragraphs express, in claim-like form, various aspects and feature combinations which may be of practical interest to the skilled person, not all of which may be embraced by the currently appended claims, but which may nevertheless subsequently be claimed in the instant application or its descendants.

A1. An image capture arrangement comprising a processor, and coupled to the processor: an image sensor to capture images of a monitored area; a thermal sensor to detect human presence in the monitored area; and a range-detecting sensing arrangement; wherein in a rest state the image sensor and the range-detecting sensing arrangement are powered down, and the processor is configured in at least one operating mode to respond to a signal from the thermal sensor that indicates human presence in the monitored area by powering up the range-detecting sensing arrangement to determine the range of a detected presence within the monitored area, and the processor is further configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates the detected presence within a target zone within the monitored area.

A2. The image capture arrangement of A1, wherein the processor is further configured to provide information received from the range-detecting sensing arrangement as an input to an image processing operation performed on images captured by the image sensor.

A3. The image capture arrangement of A2, wherein the image processing operation involves obscuring at least one image portion determined to be more than a threshold distance from the image capture arrangement.

A4. The image capture arrangement of A3, wherein the at least one image portion includes human features of a human determined to be more than the threshold distance from the image capture arrangement.
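The distance-based obscuring of paragraphs A3 and A4 can be illustrated with a toy example. The assumptions here are mine: the image is modelled as a 2-D grid of pixel values, a per-pixel depth estimate (e.g. derived from the range-detecting arrangement) is available, and the helper name is hypothetical.

```python
# Illustrative sketch for paragraphs A3-A4 (helper name hypothetical):
# obscure image regions whose estimated distance exceeds a threshold.

def obscure_far_regions(image, depth, threshold_m, fill=0):
    """Return a copy of `image` with pixels beyond `threshold_m` replaced.

    `image` and `depth` are same-shaped 2-D grids; `depth[r][c]` is the
    estimated distance in metres of the scene content at that pixel.
    """
    return [
        [fill if depth[r][c] > threshold_m else image[r][c]
         for c in range(len(image[0]))]
        for r in range(len(image))
    ]
```

Replacing far pixels with a constant is the bluntest option; the non-blanking techniques discussed later (e.g. blurring) would substitute a filtered value instead of `fill`.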

A5. The image capture arrangement of A2, A3 or A4, wherein the processor is configured to perform the image processing operation.

A6. The image capture arrangement of any of A2 to A5, further comprising an RF transceiver, wherein the processor is configured to use the RF transceiver to transmit image data and information received from the range-detecting sensing arrangement to a remote processor for the remote processor to perform the image processing operation.

A7. The image capture arrangement of any one of A1-A6, wherein the thermal sensor comprises a Thermal MOS, “TMOS”, sensor.

A8. The image capture arrangement of any one of A1-A7, wherein the thermal sensor comprises a PIR sensor.

A9. The image capture arrangement of any one of A1-A8, wherein the processor, image sensor, thermal sensor, and range-detecting sensing arrangement are all housed in a common housing.

A10. The image capture arrangement of A9 in the form of a video doorbell.

A11. The image capture device of A9 in the form of a security camera for an alarm system.

A12. The image capture arrangement of any one of A1-A11, wherein in the first operating mode the processor is further configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates a detected presence persisting within a target zone within the monitored area for more than a predetermined time.

A13. The image capture arrangement of A12, wherein the predetermined time is at least 2 seconds, optionally at least 3 seconds, optionally at least 4 seconds, optionally at least 5 seconds, optionally at least 6 seconds, optionally at least 7 seconds, optionally at least 8 seconds, optionally at least 9 seconds, optionally at least 10 seconds.
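The persistence condition of paragraphs A12 and A13 amounts to requiring continuous in-zone presence for a minimum dwell time before the image sensor is activated. A minimal sketch, assuming timestamped range samples from the range-detecting arrangement (function and parameter names hypothetical):

```python
# Illustrative dwell-time check for paragraphs A12-A13 (names hypothetical).

def presence_persisted(samples, in_target_zone, min_seconds):
    """True if presence stayed in the target zone continuously long enough.

    `samples` is a time-ordered list of (timestamp_s, range_m) readings;
    `in_target_zone(range_m)` decides zone membership for one reading.
    Leaving the zone resets the dwell timer.
    """
    start = None
    for t, rng in samples:
        if in_target_zone(rng):
            if start is None:
                start = t
            if t - start >= min_seconds:
                return True
        else:
            start = None  # presence left the zone: restart the clock
    return False
```

Resetting the timer on any out-of-zone sample is one reasonable reading of "persisting"; an implementation could instead tolerate brief dropouts.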

A14. The image capture arrangement of any one of A1-A13, wherein in the first mode, subsequent to activating the image sensor, the processor powers down the range-detecting sensing arrangement before powering down the image sensor.

A15. The image capture arrangement of any one of A1-A13, wherein in the first mode, subsequent to activating the image sensor, the processor powers down the range-detecting sensing arrangement and the image sensor substantially simultaneously.

A16. The image capture arrangement of any one of A1-A15, wherein the range-detecting sensing arrangement comprises a radar system, optionally a low-power radar system.

A17. The image capture arrangement of A16, wherein the radar system operates in the 60GHz band.

A18. The image capture arrangement of any one of A1-A17, wherein the range-detecting sensing arrangement comprises a time-of-flight detection system, optionally a system based on the use of ultrasound or light.

A19. The image capture arrangement of any one of A1-A18, wherein the image sensor has a longer capture range than the thermal sensor and/or the range-detecting sensing arrangement.

A20. An image capture arrangement coupled to a premises security monitoring system, the image capture arrangement optionally according to any one of A1-A19, the image capture arrangement including an image sensor and a processor, the processor of the image capture arrangement being configured to modify its behaviour depending upon a reported arm status of the security monitoring system.

A21. The image capture arrangement of A20, further comprising a range-detecting sensing arrangement.

A22. The image capture arrangement of A21, wherein the processor of the image capture arrangement is configured: in the event that the security monitoring system is in a disarmed state, to avoid activating the image sensor despite the information received from the range-detecting sensing arrangement indicating human presence within a target zone within a monitored area; and, in the event that the security monitoring system is in an armed state, to activate the image sensor if the information received from the range-detecting sensing arrangement indicates human presence within the target zone within the monitored area.

A23. An automated method of controlling an image capture arrangement that is arranged to monitor a monitored area, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting human presence in the monitored area, activating a range-detecting sensing arrangement to determine the range of a detected presence within the monitored area; and activating the image sensor only if information received from the range-detecting sensing arrangement indicates presence within a target zone within the monitored area.

A24. An automated method of capturing images with an image capture arrangement that is arranged to monitor a monitored area, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting human presence in the monitored area, activating a range-detecting sensing arrangement to determine the range of a detected presence within the monitored area; activating the image sensor only if information received from the range-detecting sensing arrangement indicates presence within a target zone within the monitored area; and capturing images of the monitored area.

A25. The method of A23 or A24, further comprising performing an image processing operation on images captured by the image sensor, using information received from the range-detecting sensing arrangement as an input.

A26. The method of A25, wherein the image processing operation involves obscuring at least one image portion determined to be more than a threshold distance from the image capture arrangement.

A27. The method of A25 or A26, wherein the image processing operation involves obscuring image portions that include human features of any human determined to be more than a threshold distance from the image capture arrangement.

A28. The method of A25, A26 or A27, wherein the image processing operation involves obscuring at least one image portion determined to be outside the target zone.

A29. The method of A25, A26, A27 or A28, wherein the image processing operation involves obscuring image portions that include human features of any human determined to be outside the target zone.

A30. The method of any one of A23 to A29, further comprising transmitting to a remote processor image data from the image capture arrangement and information received from the range-detecting sensing arrangement.

A31. An image capture arrangement comprising a processor, and coupled to the processor: an image sensor to capture images of a monitored area; a thermal sensor to detect human presence in the monitored area; and a radar arrangement; wherein in a rest state the image sensor and the radar arrangement are powered down, and the processor is configured in at least one operating mode to respond to a signal from the thermal sensor that indicates human presence in the monitored area by powering up the radar arrangement to determine the range of any human presence within the monitored area, and the processor is further configured to activate the image sensor only if information received from the radar arrangement indicates human presence within a target zone within the monitored area.

A32. The image capture arrangement of A31, wherein the processor is further configured to provide information received from the radar arrangement as an input to an image processing operation performed on images captured by the image sensor.

A33. The image capture arrangement of A32, wherein the image processing operation involves obscuring image portions that include human features of any human determined to be more than a threshold distance from the image capture arrangement.

A34. The image capture arrangement of A32 or A33, wherein the processor is configured to perform the image processing operation.

A35. The image capture arrangement of A32 or A33, further comprising an RF transceiver, wherein the processor is configured to use the RF transceiver to transmit image data and information received from the radar arrangement to a remote processor for the remote processor to perform the image processing operation.

A36. The image capture arrangement of any one of A31 to A35, wherein the thermal sensor comprises a Thermal MOS, “TMOS”, sensor.

A37. The image capture arrangement of any one of A31 to A36, wherein the thermal sensor comprises a PIR sensor.

A38. The image capture arrangement of any one of A31 to A37, wherein the processor, image sensor, thermal sensor, and radar arrangement are all housed in a common housing.

A39. The image capture arrangement of A38 in the form of a video doorbell.

A40. The image capture device of A38 in the form of a security camera for an alarm system.

A41. The image capture arrangement of any one of A31 to A40, wherein in the first operating mode the processor is further configured to activate the image sensor only if information received from the radar arrangement indicates human presence persisting within a target zone within the monitored area for more than a predetermined time.

A42. The image capture arrangement of A41 as dependent on A38, wherein the predetermined time is at least 5 seconds.

A43. The image capture arrangement of any one of A31 to A42, wherein in the first mode, subsequent to activating the image sensor, the processor powers down the radar arrangement before powering down the image sensor.

A44. The image capture arrangement of any one of A31 to A42, wherein in the first mode, subsequent to activating the image sensor, the processor powers down the radar arrangement and the image sensor substantially simultaneously.

A45. An image capture arrangement coupled to a premises security monitoring system, the processor of the image capture arrangement being configured to modify its behaviour depending upon a reported arm status of the security monitoring system.

A46. The image capture arrangement of A45, wherein: in the event that the security monitoring system is in a disarmed state, the processor of the image capture arrangement is configured to avoid activating the image sensor despite the information received from the radar arrangement indicating human presence within a target zone within the monitored area; and, in the event that the security monitoring system is in an armed state, the processor of the image capture arrangement is configured to activate the image sensor if the information received from the radar arrangement indicates human presence within a target zone within the monitored area.

A47. The image capture arrangement of any one of A31 to A46, wherein the radar arrangement comprises a low power radar system.

A48. The image capture arrangement of any one of A31 to A47, wherein the image sensor has a longer capture range than the thermal sensor and/or the radar.

A49. The image capture arrangement of any one of A31 to A48, wherein the radar arrangement operates in the 60GHz band.

A50. A method of controlling an image capture arrangement that is arranged to monitor a monitored area, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting human presence in the monitored area, activating a radar arrangement to determine the range of any human presence within the monitored area; and activating the image sensor only if information received from the radar arrangement indicates human presence within a target zone within the monitored area.

A51. A method of capturing images with an image capture arrangement that is arranged to monitor a monitored area, the method comprising: detecting human presence in the monitored area using a thermal sensor; in response to detecting human presence in the monitored area, activating a radar arrangement to determine the range of any human presence within the monitored area; activating the image sensor only if information received from the radar arrangement indicates human presence within a target zone within the monitored area; and capturing images of the monitored area.

A52. The method of A50 or A51, further comprising performing an image processing operation on images captured by the image sensor, using information received from the radar arrangement as an input.

A53. The method of A52, wherein the image processing operation involves obscuring image portions that include human features of any human determined to be more than a threshold distance from the image capture arrangement.

A54. The method of A52, wherein the image processing operation involves obscuring image portions that include human features of any human determined to be outside the target zone.

A55. The method of any of A51 to A54, further comprising transmitting to a remote processor image data from the image capture arrangement and the information received from the radar arrangement.

A56. An image capture arrangement for a security system, comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the surveillance area includes a primary zone and at least one secondary zone distinct from the primary zone, and wherein the processing arrangement is configured to produce a processed image in which one or more image portions representing features in one or more of said at least one secondary zones are obscured; wherein in the event of determining the presence of an object of interest within the primary zone, the processed image is produced with features of the object of interest within the primary zone unobscured.

A57. The image capture arrangement of A56, wherein the features in the one or more of said at least one secondary zones that are obscured include features of one or more objects of interest, optionally a person.

A58. The image capture arrangement of A56 or A57, comprising a range detecting arrangement to provide range information for objects within the surveillance area.

A59. The image capture arrangement of A58, wherein the range detecting arrangement comprises one or more selected from: a radar arrangement; a time-of-flight sensing arrangement; an ultrasonic or optical sensor; an arrangement for detecting range information based at least partly on the images captured by the image capture arrangement.

A60. The image capture arrangement of A58 or A59, wherein the range information includes information on direction as well as distance.

A61. The image capture arrangement of any one of A58 to A60, wherein the range detecting arrangement is configured to provide size information on the size of a detected object.

A62. The image capture arrangement of any one of A56 to A61, wherein the primary zone corresponds to a near field zone of the surveillance area, and the one or more secondary zones correspond to one or more far field zones of the surveillance area.

A63. The image capture arrangement of A62, wherein the processing arrangement is configured to use the range information, and optionally size information, in determining whether an object of interest whose image has been captured is within or beyond the near field zone.

A64. The image capture arrangement of A62 or A63, wherein the processing arrangement is also configured to obscure (i) image portions representing non-human objects located beyond the near field zone, and/or (ii) optionally to obscure at least a majority of far field image portions; and/or (iii) optionally to obscure all far field image portions.

A65. The image capture arrangement of any one of A62 to A64, wherein the processing arrangement is configured to define a bounding box around an image portion representing a person within the near field zone.

A66. The image capture arrangement of A65, wherein the processing arrangement is configured to obscure at least a portion of the image outside the or each bounding box.

A67. The image capture arrangement of A65, wherein the processing arrangement is also configured to define a bounding box around any image portion representing a person in a zone beyond the near field zone.

A68. The image capture arrangement of A67, wherein the processing arrangement is configured to obscure the content of any bounding box around any image portion representing a person in a zone beyond the near field zone.
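Paragraphs A65 to A68 describe selective obscuring driven by bounding boxes and zones: boxes around near-field persons remain clear, while boxes around far-field persons have their content obscured. A minimal sketch of the selection step, assuming each detection carries a bounding box and a zone label (the data shape and function name are hypothetical):

```python
# Illustrative selection step for paragraphs A65-A68 (names hypothetical).

def regions_to_obscure(detections):
    """Return the bounding boxes whose content should be obscured.

    `detections` is a list of dicts with 'bbox' (x0, y0, x1, y1) and
    'zone' ('near' or 'far'). Per A67-A68, far-field person boxes are
    obscured; near-field boxes stay clear per A65.
    """
    return [d["bbox"] for d in detections if d["zone"] == "far"]
```

The returned boxes would then be passed to whatever obscuration technique is in use (blanking, blurring, etc.), applied only inside those regions.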

A69. The image capture arrangement of any one of A56 to A68, wherein the processing arrangement and the image capture arrangement are provided in a common housing.

A70. The image capture arrangement of any one of A56 to A68, wherein the processing arrangement is remote from the image capture arrangement.

A71. The image capture arrangement of A70, wherein the image capture arrangement is provided in a housing together with an RF transceiver, the RF transceiver providing a communications link to the remote processing arrangement.

A72. The image capture arrangement of any one of A56 to A71, wherein the image capture arrangement is provided by the video camera of a video doorbell.

A73. An image capture arrangement for a security system, optionally according to any one of A1 to A72, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure at least one region by a non-blanking technique that preserves at least a portion of image information in said region.

A74. An image capture arrangement for a security system, optionally according to any one of A1 to A73, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure at least one region by any of blurring, painting over, applying a linear algorithm to increase complexity, optionally a box blur or a Gaussian blur, mixing and mirroring, distorting the image such as by imposing twirls or waves on the image, or some combination thereof.
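Of the obscuration techniques listed in paragraph A74, a box blur is among the simplest non-blanking options: it removes fine detail (such as facial features) while preserving coarse layout, consistent with paragraphs A73 and A75. A pure-Python sketch over a plain 2-D grid of pixel values, for illustration only (a real implementation would use an image library):

```python
# Illustrative box blur for paragraph A74: averages each pixel with its
# neighbours in a (2*radius+1)-square window, clamped at the image edges.

def box_blur(image, radius=1):
    """Return a blurred copy of a 2-D grid of numeric pixel values."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = [image[rr][cc]
                    for rr in range(max(0, r - radius), min(h, r + radius + 1))
                    for cc in range(max(0, c - radius), min(w, c + radius + 1))]
            out[r][c] = sum(vals) / len(vals)
    return out
```

Because the output is a local average, the rough position and brightness of objects survives, which is the property paragraph A75 relies on for retaining positional information.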

A75. An image capture arrangement for a security system, optionally according to any one of A1 to A74, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure at least one region using a technique that is effective to remove information permitting the identification of a person, such as facial features, but which retains image information from which a viewer is provided with positional information, optionally enabling spatial awareness on the position of image objects with respect to each other and/or with respect to the image capture arrangement.

A76. The image capture arrangement according to A75, wherein the technique is such that the removed information is substantially non-recoverable in the processed image.

A77. An image capture arrangement for a security system, optionally according to any one of A1 to A76, the image capture arrangement comprising an image sensor to capture images of a surveillance area, and a processing arrangement to perform image processing on images captured by the image capture arrangement, wherein the processing arrangement is configured to obscure a first region of the image using a first obscuration technique, and to obscure a second region of the image using a second obscuration technique, the second obscuration technique being different from the first, and the second obscuration technique preserving at least some information from the image in the second region.

A78. The image capture arrangement of any one of A1 to A77, wherein the image capture arrangement is configured to store zone data defining the primary zone and the or each secondary zone, the zone data having been acquired during an initial setup process.

A79. The image capture arrangement of any one of A1 to A78, wherein the processing arrangement is configured to store an image file corresponding to the processed image.

A80. The image capture arrangement of any one of A1 to A79, further comprising a transceiver, wherein the processing arrangement is configured to use the transceiver to output an image file corresponding to the processed image.

A81. The image capture arrangement of any one of A1 to A76, wherein the processing arrangement is configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates human presence persisting within the primary zone within the surveillance area for more than a predetermined time.

A82. The image capture arrangement of A81 wherein the predetermined time is at least 2 seconds, optionally at least 3 seconds, optionally at least 4 seconds, optionally at least 5 seconds, optionally at least 6 seconds, optionally at least 7 seconds, optionally at least 8 seconds, optionally at least 9 seconds, optionally at least 10 seconds.

A83. The image capture arrangement of any one of A1 to A82 configured as a video doorbell.

A84. The image capture arrangement of any one of A1 to A82 in the form of a security camera for an alarm system.

A85. A security monitoring system including an image capture arrangement according to any one of A56 to A84.

A86. A method performed by an image capture arrangement of a security system, the image capture arrangement comprising an image sensor to capture images of a surveillance area, wherein the surveillance area includes a primary zone and at least one secondary zone distinct from the primary zone, the method comprising: producing a processed image in which at least one or more image portions representing features in one or more of said at least one secondary zones, is obscured; wherein in the event of determining the presence of an object of interest within the primary zone, producing the processed image with features of the object of interest within the primary zone unobscured.

A87. The method of A86, further comprising obscuring features of one or more objects of interest, optionally a person, in the one or more of said at least one secondary zones.

A88. The method of A86 or A87, wherein the primary zone corresponds to a near field zone of the surveillance area, and the one or more secondary zones correspond to one or more far field zones of the surveillance area, the method further comprising using range information, and optionally size information, in determining whether an object of interest whose image has been captured is within or beyond the near field zone.

A89. The method of A86 or A87, wherein the primary zone corresponds to a near field zone of the surveillance area, and the one or more secondary zones correspond to one or more far field zones of the surveillance area, the method further comprising obscuring: (i) image portions representing non-human objects located beyond the near field zone, and/or (ii) optionally at least a majority of far field image portions; and/or (iii) optionally all far field image portions.

A90. The method of A86 or A87, wherein the primary zone corresponds to a near field zone of the surveillance area, and the one or more secondary zones correspond to one or more far field zones of the surveillance area, the method further comprising defining a bounding box around an image portion representing a person within the near field zone.

A91. The method of A90, further comprising obscuring at least a portion of the image outside the or each bounding box.

A92. The method of A90, further comprising defining a bounding box around any image portion representing a person in a zone beyond the near field zone.

A93. The method of A92, further comprising, obscuring the content of any bounding box around any image portion representing a person in a zone beyond the near field zone.

A94. A method performed by an image capture arrangement of a security system, optionally according to any one of A86 to A93, the method comprising obscuring at least one region by a non-blanking technique that preserves at least a portion of image information in said region.

A95. A method performed by an image capture arrangement of a security system, optionally according to any one of A86 to A94, the method comprising obscuring at least one region by any of blurring, painting over, applying a linear algorithm to increase complexity, optionally a box blur or a Gaussian blur, mixing and mirroring, distorting the image such as by imposing twirls or waves on the image, or some combination thereof.

A96. A method performed by an image capture arrangement of a security system, optionally according to any one of A86 to A95, the method comprising obscuring at least one region using a technique that is effective to remove information permitting the identification of a person, such as facial features, but which retains image information from which a viewer is provided with positional information, optionally enabling spatial awareness on the position of image objects with respect to each other and/or with respect to the image capture arrangement.

A97. A method according to A96, wherein the processing is such that the removed information is substantially non-recoverable in the processed image.

A98. A method performed by an image capture arrangement of a security system, optionally according to any one of A86 to A97, the method comprising obscuring a first region of the image using a first obscuration technique, and obscuring a second region of the image using a second obscuration technique, the second obscuration technique being different from the first, and the second obscuration technique preserving at least some information from the image in the second region.

A99. A method performed by an image capture arrangement of a security system, optionally according to any one of A86 to A98, wherein the surveillance area includes a primary zone and at least one secondary zone distinct from the primary zone, the method comprising activating an image sensor of the image capture arrangement only if information received from a range-detecting sensing arrangement indicates human presence persisting within the primary zone within the surveillance area for more than a predetermined time.

A100. A premises security monitoring system including at least one image capture arrangement according to any one of A1-A22, A31-A49, and A56-A84.

A101. A premises security monitoring system including at least one image capture arrangement configured to perform the method according to any one of A23-A30, A50-A55, and A86-A99.

A102. An image capture arrangement comprising a processing arrangement, and coupled to the processing arrangement: a detection arrangement to detect motion in a monitored area, the processing arrangement and the detection arrangement being configured to determine whether detected motion could be from an object of interest; an image sensor to capture images of the monitored area; wherein in a rest state the image sensor is powered down, and the processing arrangement is, in at least one operating mode, configured based at least in part on a determination that detected motion could be from an object of interest to power up the image sensor to capture images of the monitored area, and the processing arrangement is further configured, if it is determined that an object of interest is within a target zone of the monitored area, to report and/or save the processed images; wherein determining whether the object of interest is within a target zone of the observed area is based either on ranging information provided by a ranging arrangement targeting the monitored area or on ranging information determined by performing range estimation on the captured images.

A103. The image capture arrangement of A102, wherein the processing arrangement is further configured to process the captured images to determine the object(s) whose motion was detected and to determine whether the determined object(s) is an object of interest; and if it is determined that the object that triggered motion detection was an object of interest, to determine whether the object of interest is within a target zone of the observed area.

A104. The image capture arrangement of A102 or A103, wherein the processing arrangement is further configured to determine whether the captured images include any outlying objects of interest, being objects of interest outside the target zone, and if there are any such outlying objects of interest to process the captured images to mask/blur/remove image portions corresponding to said outlying objects of interest.

A105. The image capture arrangement of A102 or A103, wherein the monitored area has a near field zone and a far field zone beyond the near field zone, and wherein the processing arrangement is configured to: identify a presence of an object of interest, optionally a person, in the near field zone and, responsive to the identified presence; define a reserved region of the image corresponding to at least a portion of said object of interest, optionally a person, in the image; and obscure a second portion of the image outside the reserved region.

A106. The image capture arrangement of A105, wherein the reserved region is a dynamic reserved region.

A107. The image capture arrangement of A105 or A106, wherein the second portion of the image corresponds to at least a portion of the far field zone.

A108. The image capture arrangement of any one of A105 to A107, wherein the processing arrangement is configured to identify a presence of a person in the far field zone, and to set the second portion of the image to correspond at least partly to the position in the image of the person in the far field zone, to obscure features of the person.

A109. The image capture arrangement of any one of A105 to A108, wherein the processing arrangement is configured to apply a far-field mask to the image to define one or more predefined regions of the image associated with the far field, and wherein the second portion of the image corresponds to said one or more pre-defined regions excepting the reserved region.

A110. The image capture arrangement of any one of A1-A109, wherein the detection arrangement comprises one or more thermal sensors.

A111. The image capture arrangement as claimed in any one of A1-A110, wherein the ranging information is provided by a ranging arrangement that is integral with the image capture arrangement.

A112. The image capture arrangement of A111, wherein the ranging arrangement comprises a time of flight detection system.

A113. The image capture arrangement of A111, wherein the ranging arrangement comprises a radar arrangement.

A114. The image capture arrangement of A113, wherein the radar arrangement is configured to provide directional information as well as ranging information.

A115. The image capture arrangement of A113 or A114, wherein the processing arrangement is further configured to activate the image sensor only if information received from the radar arrangement indicates human presence within the target zone within the monitored area.

A116. The image capture arrangement of any one of A102 to A110, wherein the ranging information is determined by performing range estimation on the captured images.

A117. The image capture arrangement of any one of A102 to A116, wherein the thermal sensor comprises a Thermal MOS, “TMOS”, sensor.

A118. The image capture arrangement of any one of A1-A117, wherein the thermal sensor comprises a PIR sensor.

A119. The image capture arrangement of any one of A1-A118, wherein the processor, image sensor, thermal sensor, and range-detecting sensing arrangement are all housed in a common housing.

A120. The image capture arrangement of A119 in the form of a video doorbell.

A121. The image capture device of A119 in the form of a security camera for an alarm system.

A122. The image capture arrangement of any one of A1-A118, wherein in the first operating mode the processor is further configured to activate the image sensor only if information received from the range-detecting sensing arrangement indicates a detected presence persisting within a target zone within the monitored area for more than a predetermined time.

A123. The image capture arrangement of A122, wherein the predetermined time is at least 2 seconds, optionally at least 3 seconds, optionally at least 4 seconds, optionally at least 5 seconds, optionally at least 6 seconds, optionally at least 7 seconds, optionally at least 8 seconds, optionally at least 9 seconds, optionally at least 10 seconds.
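By way of an illustrative, non-limiting sketch, the persistence condition of A122 and A123 may be realised as a polling loop that only confirms a detection once the ranging arrangement has reported a presence inside the target zone continuously for the predetermined time. The sensor-reading callable and parameter names below are assumptions for illustration only.

```python
import time

def presence_persists(read_range_m, target_min_m: float, target_max_m: float,
                      hold_s: float = 3.0, poll_s: float = 0.1) -> bool:
    """Return True only if the range sensor keeps reporting a presence
    inside the target zone for at least `hold_s` seconds.

    read_range_m: callable returning the current detected range in metres,
                  or None when nothing is detected (hypothetical interface).
    """
    deadline = time.monotonic() + hold_s
    while time.monotonic() < deadline:
        r = read_range_m()
        if r is None or not (target_min_m <= r <= target_max_m):
            return False  # presence left the target zone: do not wake the camera
        time.sleep(poll_s)
    return True  # presence persisted: the image sensor may be activated
```

Such a debounce avoids waking the power-hungry image sensor on transient detections, e.g. a passer-by who briefly crosses the target zone.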

A124. The image capture arrangement of any one of A1-A123, wherein in the first mode, subsequent to activating the image sensor, the processor powers down the range-detecting sensing arrangement before powering down the image sensor.

A125. The image capture arrangement of any one of A1-A124, wherein in the first mode, subsequent to activating the image sensor, the processor powers down the range-detecting sensing arrangement and the image sensor substantially simultaneously.

A126. The image capture arrangement of any one of A1-A125, wherein the range-detecting sensing arrangement comprises a radar system, optionally a low-power radar system.

A127. The image capture arrangement of A126, wherein the radar system operates in the 60GHz band.

A128. The image capture arrangement of any one of A1-A125, wherein the range-detecting sensing arrangement comprises a time-of-flight detection system, optionally a system based on the use of ultrasound or light.

A129. The image capture arrangement of any one of A1-A128, wherein the image sensor has a longer capture range than the thermal sensor and/or the range-detecting sensing arrangement.

A130. An image capture arrangement coupled to a premises security monitoring system, the image capture arrangement optionally according to any of A1-A129, the image capture arrangement including an image sensor and a processor, the processor of the image capture arrangement being configured to modify its behaviour depending upon a reported arm status of the security monitoring system.

A131. The image capture arrangement of A130, further comprising a range-detecting sensing arrangement.

A132. The image capture arrangement of A131, wherein the processor of the image capture arrangement is configured: in the event that the security monitoring system is in a disarmed state, to avoid activating the image sensor despite the information received from the range-detecting sensing arrangement indicating human presence within a target zone within a monitored area; and, in the event that the security monitoring system is in an armed state, to activate the image sensor if the information received from the range-detecting sensing arrangement indicates human presence within the target zone within the monitored area.
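By way of an illustrative, non-limiting sketch, the arm-status gating of A132 reduces to a simple decision: when the security monitoring system is disarmed, a presence in the target zone is ignored; when armed, it triggers the image sensor. The type and function names below are assumptions for illustration only.

```python
from enum import Enum

class ArmStatus(Enum):
    DISARMED = "disarmed"
    ARMED = "armed"

def should_activate_image_sensor(arm_status: ArmStatus,
                                 presence_in_target_zone: bool) -> bool:
    """Wake the image sensor only when the security monitoring system is
    armed AND the range-detecting arrangement reports a presence in the
    target zone."""
    if arm_status is ArmStatus.DISARMED:
        return False  # disarmed: do not activate, even if presence is detected
    return presence_in_target_zone
```

In practice the reported arm status would come from the security monitoring system (e.g. over the link between the peripheral and its central unit), and richer arm states such as "armed stay" could be added to the enumeration.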

A133. An automated method comprising:
i) detecting motion in an observed area;
ii) determining whether the detected motion could be from an object of interest;
iii) based at least in part on ii), waking a camera and capturing video or multiple images of the observed area;
iv) if it is determined that an object of interest is within the target zone, reporting or saving the processed images;
wherein the determining of step iv) is based either on ranging information provided by a ranging arrangement targeting the observed area or on ranging information determined by performing range estimation on the captured images.
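By way of an illustrative, non-limiting sketch, the staged method of A133 may be expressed as a pipeline in which the camera is woken lazily, only after the cheaper motion and classification checks have passed. Each stage is injected here as a callable; all names are illustrative and not part of the claimed method.

```python
def staged_capture(motion_detected, could_be_of_interest, wake_and_capture,
                   in_target_zone, report):
    """Pipeline for the staged method:
      i)   motion_detected()        - motion sensor check
      ii)  could_be_of_interest()   - cheap screening of the detected motion
      iii) wake_and_capture()       - power up camera, return captured frames
      iv)  in_target_zone(frames)   - ranging-based decision (dedicated
                                      ranging arrangement or image-based
                                      range estimation)
    Returns the captured frames if they were reported/saved, else None.
    """
    if not motion_detected():
        return None                      # step i failed: nothing moved
    if not could_be_of_interest():
        return None                      # step ii failed: camera stays asleep
    frames = wake_and_capture()          # step iii: camera woken lazily
    if in_target_zone(frames):           # step iv: object of interest in zone?
        report(frames)                   # report or save the processed images
        return frames
    return None
```

The ordering matters for power consumption: the image sensor, typically the most expensive component to run, is only energised once steps i) and ii) have both passed.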

A134. The method of A133, further comprising processing the captured images to determine the object(s) whose motion was detected and to determine whether the determined object(s) is an object of interest; and, if it is determined that the object that triggered motion detection was an object of interest, determining whether the object of interest is within a target zone of the observed area.

A135. The method of A133 or A134, carried out on a peripheral device of a security system.

The present application contains a number of self-evidently inter-related aspects and embodiments, generally based around a common set of problems, even if many aspects do have broader applicability. In particular, the logic and control methods, whilst not necessarily limited to operating with the hardware disclosed and capable of broader application, are all particularly suited to working with the hardware of the various hardware aspects and the preferred variants thereof. It will be appreciated by the skilled person that certain aspects relate to specific instances of other features, and the preferred features described or claimed in particular aspects may be applied to others. The disclosure would become unmanageably long if explicit mention were made at every point of the inter-operability, and the skilled person is expected to appreciate, and is hereby explicitly instructed to appreciate, that preferred features of any aspect may be applied to any other unless explicitly stated otherwise or manifestly inappropriate from the context.

Again, for the sake of avoiding repetition, many aspects and concepts may be described only in method form or in hardware form, but the corresponding apparatus or computer program or logic is also to be taken as disclosed in the case of a method, or the method of operating the hardware in the case of an apparatus discussion. For an example of what is meant by the above, there are a number of features of both hardware and software relating to the combination of an image capture arrangement and a security monitoring system, and a system of image processing and control by a processor (within the unit or remote or both). Although these are preferred applications, most methods and hardware are more generally applicable to stand-alone elements or systems, as well as to security monitoring systems incorporating all or only some of these features and/or methods.
Moreover, aspects which give particular arrangements for any of the components, or their interaction, can be used freely with aspects which focus on alternative elements of the system. The appended claims are presented as distinct sets and are not numbered consecutively in this application, nor are they explicitly cross-referenced, for ease of following each aspect and for the benefit of subsequent applications. However, all appended claims should be considered to be multiply dependent on all other sets of claims except where they self-evidently relate to incompatible alternatives, as would be understood by a skilled engineer rather than construed in an artificially reductive way relying only on explicit cross-references.