Title:
SECURITY MANAGEMENT SYSTEM
Document Type and Number:
WIPO Patent Application WO/2022/215088
Kind Code:
A1
Abstract:
A method for facilitating security management is provided. A first edge node of a plurality of edge nodes includes a plurality of sensors for monitoring a geographical region. Processing circuitry, included in the edge node, detects an object within the geographical region based on a plurality of sensor outputs of the plurality of sensors. The processing circuitry determines, using a first machine learning model, an intent of the detected object. The processing circuitry provides the determined intent as input to a second machine learning model for threat assessment of the detected object. The processing circuitry assigns a threat level from a plurality of threat levels to the detected object based on an output of the second machine learning model. The processing circuitry initiates a threat alert procedure when the threat level assigned to the detected object exceeds a threshold value.

Inventors:
MAGDUM AASHUTOSH (IN)
Application Number:
PCT/IN2022/050339
Publication Date:
October 13, 2022
Filing Date:
April 07, 2022
Assignee:
MAGDUM AASHUTOSH (IN)
International Classes:
G05B19/00; G05B23/00
Foreign References:
US10819719B2 (2020-10-27)
KR20210034887A (2021-03-31)
EP3515038A1 (2019-07-24)
Attorney, Agent or Firm:
SABNIS, Ojas (IN)
Claims:
CLAIMS

WE CLAIM:

1. A security management system characterized in that, the security management system comprising: an edge node network comprising a plurality of edge nodes (102) that are communicatively coupled to form a mesh network, wherein a first edge node (102a) of the plurality of edge nodes (102) comprises: a first plurality of sensors configured to monitor a geographical region (204) and generate a first plurality of sensor outputs based on the monitoring of the geographical region (204); and first processing circuitry (318) configured to: detect an object within the geographical region (204) based on the first plurality of sensor outputs; determine, using a first machine learning model (602h), an intent of the detected object, wherein the determined intent is at least one of a first type of intent or a second type of intent; provide the determined intent as an input to a second machine learning model (602i) for threat assessment of the detected object; assign a threat level from a plurality of threat levels to the detected object based on an output of the second machine learning model (602i) for the inputted intent; and initiate a threat alert procedure when the threat level assigned to the detected object exceeds a first threshold value.

2. The security management system of claim 1, wherein the first plurality of sensors include a set of radar sensors (302), a set of infrared sensors (304), a set of visible spectrum sensors (306), and a set of laser rangefinder sensors (308), and wherein the first plurality of sensor outputs include first through fourth sets of sensor outputs generated by the set of radar sensors (302), the set of infrared sensors (304), the set of visible spectrum sensors (306), and the set of laser rangefinder sensors (308), respectively.

3. The security management system of claim 2, wherein the first machine learning model (602h) is trained for intent detection based on a first training dataset, wherein the first training dataset includes a first set of parameters corresponding to a set of objects, and wherein based on the first training dataset, the first machine learning model (602h) is configured to learn a correlation between a type of intent and the first set of parameters.

4. The security management system of claim 2, wherein the first edge node (102a) further includes a second plurality of sensors that are configured to generate a second plurality of sensor outputs, wherein the second plurality of sensors include a set of light detection and ranging (LiDAR) sensors (310), a set of acoustic sensors (312), and a set of inertial movement unit sensors (314), and wherein the second plurality of sensor outputs include fifth through seventh sets of sensor outputs generated by the set of LiDAR sensors (310), the set of acoustic sensors (312), and the set of inertial movement sensors (314), respectively.

5. The security management system of claim 4, wherein the first processing circuitry (318) is further configured to: determine, based on at least one of the first through sixth sets of sensor outputs, a likelihood of a presence of the object in the geographical region; detect, based on the seventh set of sensor outputs, a current orientation of at least one of the first through sixth sets of sensors (302-312), wherein the first through sixth sets of sensors (302-312) include the set of radar sensors (302), the set of infrared sensors (304), the set of visible spectrum sensors (306), the set of laser rangefinder sensors (308), the set of LiDAR sensors (310), and the set of acoustic sensors (312), respectively; and adjust the current orientation of at least one of the first through sixth sets of sensors (302-312) to focus in a direction where the object is likely to be present, when the likelihood of the presence of the object exceeds a second threshold value, wherein the detection of the object is further based on the second plurality of sensor outputs.

6. The security management system of claim 5, wherein the first processing circuitry (318) is further configured to determine, based on the first and second pluralities of sensor outputs, a current position and a current velocity of the detected object.

7. The security management system of claim 6, wherein the first processing circuitry (318) is further configured to: track a movement of the detected object in the geographical region (204), based on the first and second pluralities of sensor outputs; and forecast a trajectory of the detected object in the geographical region (204) based on the tracked movement of the detected object, wherein the forecasted trajectory includes a series of forecasted positions and forecasted velocities of the detected object.

8. The security management system of claim 7, wherein the second machine learning model (602i) is trained based on a second training dataset for threat level assessment, wherein the second training dataset includes a second set of parameters corresponding to a set of objects, and wherein based on the second training dataset, the second machine learning model (602i) is configured to learn a correlation between the plurality of threat levels and the second set of parameters.

9. The security management system of claim 7, wherein the first edge node (102a) further includes a database (328) configured to store a terrain signature of the geographical region (204), and wherein the detection of the object is further based on the terrain signature.

10. The security management system of claim 9, further comprising a remote node (104) that is communicatively coupled to the first edge node (102a) and includes second processing circuitry (502), wherein the first processing circuitry (318) is further configured to communicate the terrain signature to the remote node (104).

11. The security management system of claim 10, wherein the first processing circuitry (318) is further configured to: generate, based on the initiation of the threat alert procedure, a threat alert indicative of an entity identifier, the assigned threat level, the current position, the current velocity, and the forecasted trajectory of the detected object; and communicate the threat alert to the remote node (104).

12. The security management system of claim 11, wherein the second processing circuitry (502) is configured to: select, from a plurality of user devices (106), a set of user devices suitable for receiving the threat alert, wherein the selection of the set of user devices is based on the threat level and an authorization level of each of the set of user devices; and communicate the threat alert to the selected set of user devices.

13. The security management system of claim 11, wherein the second processing circuitry (502) is configured to generate a virtual environment, based on the received terrain signature and the threat alert, and wherein the generated virtual environment displays a digital version of the geographical region (204) overlaid with the forecasted trajectory of the detected object.

14. The security management system of claim 10, wherein the first processing circuitry (318) is further configured to: generate, based on the initiation of the threat alert procedure, a threat alert indicative of an entity identifier, the assigned threat level, the current position, the current velocity, and the forecasted trajectory of the detected object; select, from a plurality of user devices (106), a set of user devices suitable for receiving the threat alert, wherein the selection of the set of user devices is based on the threat level and an authorization level of each of the set of user devices; and communicate the threat alert to the selected set of user devices.

Description:
SECURITY MANAGEMENT SYSTEM

BACKGROUND

FIELD OF THE INVENTION

Various embodiments of the disclosure relate generally to security management. More specifically, various embodiments of the disclosure relate to methods and systems for automated security management for a secure area.

DESCRIPTION OF THE RELATED ART

Security constitutes an important aspect of the modern world. Various types of assets, such as factories, oil refineries, wind farms, solar farms, aircraft hangars, airports, or the like, require security for protection against sabotage or intrusion. Round-the-clock security is mandatory for preventing unauthorized access to these assets.

Security systems typically include a combination of video surveillance and manual oversight by security personnel. For example, a security system at a factory may involve security personnel monitoring security camera feeds at the factory, patrolling the factory, or the like. However, such security systems prove inefficient when large spaces (e.g., oil refineries, aircraft hangars, or wind farms) are to be guarded. Assets that have a large physical footprint require maintenance of a large security force, which may be prohibitively expensive. Such assets may also require installation of a large number of security cameras and manual monitoring of the video feeds from those cameras. Therefore, security for these assets is neither efficient nor fool-proof.

In light of the foregoing, there exists a need for a technical and reliable solution that overcomes the abovementioned problems and ensures effective security management for assets. Further areas of applicability of the invention will become apparent from the detailed description provided hereinafter. It should be understood that the detailed description of exemplary embodiments is intended for illustration purposes only and is, therefore, not intended to limit the scope of the invention.

OBJECTS OF THE INVENTION

An object of the present invention is to provide a system of security management.

SUMMARY

In an embodiment of the present disclosure, a security system is provided. The security system includes an edge node network including a plurality of edge nodes that are communicatively coupled to form a mesh network. A first edge node of the plurality of edge nodes includes a first plurality of sensors configured to monitor a geographical region and generate a first plurality of sensor outputs based on the monitoring of the geographical region. The first edge node further includes first processing circuitry configured to detect an object within the geographical region based on the first plurality of sensor outputs. The first processing circuitry is further configured to determine, using a first machine learning model, an intent of the detected object, wherein the determined intent is at least one of a first type of intent or a second type of intent. The first processing circuitry is further configured to provide the determined intent as an input to a second machine learning model for threat assessment of the detected object. The first processing circuitry is further configured to assign a threat level from a plurality of threat levels to the detected object based on an output of the second machine learning model for the inputted intent. The first processing circuitry is further configured to initiate a threat alert procedure when the threat level assigned to the detected object exceeds a first threshold value.
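
By way of a non-limiting illustration that is not part of the original disclosure, the following Python sketch shows one possible shape of the summarized flow: detection, intent classification by a first model, threat assessment by a second model, and threshold-gated alerting. The names (DetectedObject, IntentModel, ThreatModel, assess) and the toy decision rules are hypothetical placeholders standing in for trained machine learning models.

```python
from dataclasses import dataclass
from typing import Optional

THREAT_THRESHOLD = 0  # first threshold value; any assigned level above it triggers an alert


@dataclass
class DetectedObject:
    object_type: str               # e.g. "human", "vehicle", "animal"
    speed_mps: float               # current speed derived from the sensor outputs
    heading_to_secure_area: bool   # whether the forecasted trajectory points at the secure area


class IntentModel:
    """Stand-in for the first machine learning model (intent detection)."""

    def predict(self, obj: DetectedObject) -> str:
        # Toy rule: a fast object headed toward the secure area is treated as "suspicious".
        if obj.heading_to_secure_area and obj.speed_mps > 2.0:
            return "suspicious"
        return "non-suspicious"


class ThreatModel:
    """Stand-in for the second machine learning model (threat assessment)."""

    def predict(self, obj: DetectedObject, intent: str) -> int:
        if intent == "non-suspicious":
            return 0
        return 5 if obj.object_type == "human" else 2   # levels drawn from the "0"-"6" scale


def assess(obj: Optional[DetectedObject], intent_model: IntentModel, threat_model: ThreatModel):
    """One monitoring cycle: returns an alert dict when the assigned level exceeds the threshold."""
    if obj is None:                                # nothing was detected from the sensor outputs
        return None
    intent = intent_model.predict(obj)             # first machine learning model
    level = threat_model.predict(obj, intent)      # second machine learning model
    if level > THREAT_THRESHOLD:                   # initiate the threat alert procedure
        return {"intent": intent, "threat_level": level}
    return None


# Example: a human moving quickly toward the secure area yields an alert.
print(assess(DetectedObject("human", 4.5, True), IntentModel(), ThreatModel()))
```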

These and other features and advantages of the present disclosure may be appreciated from a review of the following detailed description of the present disclosure, along with the accompanying figures in which like reference numerals refer to like parts throughout.

BRIEF DESCRIPTION OF DRAWINGS

The features of the present invention, which are believed to be novel, are set forth with particularity in the appended claims. Embodiments of the present invention will hereinafter be described in conjunction with the appended drawings provided to illustrate and not to limit the scope of the claims, wherein like designations denote like elements, and in which:

FIG. 1 is a block diagram that illustrates a system environment for security management for a secure area, in accordance with an exemplary embodiment of the disclosure;

FIG. 2 is a block diagram that illustrates a layout of an edge node network of FIG. 1 in a geographical region, in accordance with an exemplary embodiment of the present disclosure;

FIG. 3 is a block diagram that illustrates an edge node of FIG. 1, in accordance with an exemplary embodiment of the present disclosure;

FIG. 4 is a block diagram that illustrates processing circuitry of FIG. 3, in accordance with an exemplary embodiment of the present disclosure;

FIG. 5 is a block diagram that illustrates a remote node of FIG. 1, in accordance with an exemplary embodiment of the present disclosure;

FIG. 6 is a block diagram that illustrates a machine learning engine of FIG. 5, in accordance with an exemplary embodiment of the present disclosure;

FIGS. 7A-7F, collectively, represent a process flow diagram that illustrates detection and threat assessment of an object in the geographical region, in accordance with an exemplary embodiment of the present disclosure;

FIG. 8 represents a process flow diagram that illustrates communication of a threat alert, in accordance with an exemplary embodiment of the present disclosure;

FIG. 9 represents a process flow diagram that illustrates communication of the threat alert, in accordance with another exemplary embodiment of the present disclosure;

FIGS. 10A-10C collectively, represent a flow chart that illustrates a method for facilitating security management for the secure area, in accordance with an exemplary embodiment of the present disclosure;

FIG. 11 represents a flow chart that illustrates a method for facilitating security management for the secure area, in accordance with an exemplary embodiment of the present disclosure; and

FIG. 12 is a block diagram that illustrates a system architecture of a computer system 1200 for facilitating security management for the secure area, in accordance with an exemplary embodiment of the disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS

Certain embodiments of the disclosure may be found in the disclosed systems and methods for facilitating security management for a secure area. Exemplary aspects of the disclosure provide a security management system. The security management system includes an edge node network that includes a plurality of edge nodes that are communicatively coupled to form a mesh network. A first edge node of the plurality of edge nodes includes a first plurality of sensors configured to monitor a geographical region and generate a first plurality of sensor outputs based on the monitoring of the geographical region. The security management system further includes first processing circuitry configured to detect an object within the geographical region based on the first plurality of sensor outputs. The first processing circuitry is further configured to determine, using a first machine learning model, an intent of the detected object. The determined intent is at least one of a first type of intent or a second type of intent. The first processing circuitry is further configured to provide the determined intent as an input to a second machine learning model for threat assessment of the detected object. The first processing circuitry is further configured to assign a threat level from a plurality of threat levels to the detected object based on an output of the second machine learning model for the inputted intent. The first processing circuitry is further configured to initiate a threat alert procedure when the threat level assigned to the detected object exceeds a first threshold value.

In some embodiments, the first plurality of sensors include a set of radar sensors, a set of infrared sensors, a set of visible spectrum sensors, and a set of laser rangefinder sensors. The first plurality of sensor outputs include first through fourth sets of sensor outputs generated by the set of radar sensors, the set of infrared sensors, the set of visible spectrum sensors, and the set of laser rangefinder sensors, respectively.

In some embodiments, the first machine learning model is trained for intent detection based on a first training dataset. The first training dataset includes a first set of parameters corresponding to a set of objects. Based on the first training dataset, the first machine learning model is configured to learn a correlation between a type of intent and the first set of parameters.

In some embodiments, the first edge node further includes a second plurality of sensors that are configured to generate a second plurality of sensor outputs. The second plurality of sensors include a set of light detection and ranging (LiDAR) sensors, a set of acoustic sensors, and a set of inertial movement unit sensors. The second plurality of sensor outputs include fifth through seventh sets of sensor outputs generated by the set of LiDAR sensors, the set of acoustic sensors, and the set of inertial movement sensors, respectively.

In some embodiments, the first processing circuitry is further configured to determine, based on at least one of the first through sixth sets of sensor outputs, a likelihood of a presence of the object in the geographical region. The first processing circuitry is further configured to detect, based on the seventh set of sensor outputs, a current orientation of at least one of the first through sixth sets of sensors. The first processing circuitry is further configured to adjust the current orientation of at least one of the first through sixth sets of sensors to focus in a direction where the object is likely to be present, when the likelihood of the presence of the object exceeds a second threshold value. The detection of the object is further based on the second plurality of sensor outputs.

In some embodiments, the first processing circuitry is further configured to determine, based on the first and second pluralities of sensor outputs, a current position and a current velocity of the detected object.

In some embodiments, the first processing circuitry is further configured to track a movement of the detected object in the geographical region, based on the first and second pluralities of sensor outputs. The first processing circuitry is further configured to forecast a trajectory of the detected object in the geographical region based on the tracked movement of the detected object. The forecasted trajectory includes a series of forecasted positions and forecasted velocities of the detected object.

In some embodiments, the second machine learning model is trained based on a second training dataset for threat level assessment. The second training dataset includes a second set of parameters corresponding to a set of objects. Based on the second training dataset, the second machine learning model is configured to learn a correlation between the plurality of threat levels and the second set of parameters.
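
The disclosure does not specify how the second machine learning model is realized or trained. The sketch below, offered purely as a non-limiting illustration, trains an off-the-shelf random forest on a synthetic tabular dataset whose columns (intent flag, speed, distance to the secure area) are assumed stand-ins for the second set of parameters; none of these choices are taken from the source.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative training of a threat-level classifier on a tabular "second training dataset".
# Feature layout, label generation, and the random-forest choice are assumptions for illustration.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),          # determined intent: 0 = non-suspicious, 1 = suspicious
    rng.uniform(0, 15, n),          # object speed (m/s)
    rng.uniform(0, 12_000, n),      # distance to the secure area (m)
])
# Synthetic labels on the "0"-"6" threat scale, only to make the example runnable.
y = np.clip((X[:, 0] * 3 + (X[:, 1] > 5) + (X[:, 2] < 2_000) * 2).astype(int), 0, 6)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)                                  # learn a threat-level <-> parameter correlation

# Inference for one detected object: suspicious intent, 6 m/s, 1.5 km from the secure area.
print(model.predict([[1, 6.0, 1_500]]))
```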

In some embodiments, the first edge node further includes a database configured to store a terrain signature of the geographical region. The detection of the object is further based on the terrain signature.

In some embodiments, the first processing circuitry is further configured to generate, based on the initiation of the threat alert procedure, a threat alert indicative of an entity identifier, the assigned threat level, the current position, the current velocity, and the forecasted trajectory of the detected object. The first processing circuitry is further configured to communicate the threat alert to the remote node.

In some embodiments, one of the first and second processing circuitries is configured to select, from a plurality of user devices, a set of user devices suitable for receiving the threat alert. The selection of the set of user devices is based on the threat level and an authorization level of each of the set of user devices. The one of the first and second processing circuitries is configured to communicate the threat alert to the selected set of user devices.

In some embodiments, the second processing circuitry is configured to generate a virtual environment, based on the received terrain signature and the threat alert. The generated virtual environment displays a digital version of the geographical region overlaid with the forecasted trajectory of the detected object.

FIG. 1 is a block diagram that illustrates a system environment for facilitating security management for a secure area, in accordance with an exemplary embodiment of the disclosure. Referring to FIG. 1, there is shown a system environment 100 that includes first through nth edge nodes 102a-102n, a remote node 104, and first through fifth user devices 106a-106e. The first through nth edge nodes 102a-102n are designated and referred to as “plurality of edge nodes 102”. The first through fifth user devices 106a-106e are referred to as “plurality of user devices 106”. The plurality of edge nodes 102, the remote node 104, and the plurality of user devices 106 communicate by way of a communication network 108.

Each of the plurality of edge nodes 102 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for monitoring a geographical region (or a section of the geographical region) surrounding a secure area. The secure area may refer to any area, structure, or installation, access to which is restricted. The geographical region may constitute a buffer zone for the secure area. In other words, any entity intending to enter the secure area may be required to pass through the geographical region to reach the secure area. Examples of the secure area may include, but are not limited to, oil refineries, airports, solar farms, wind farms, electrical substations, aircraft hangars, a residential complex, a mall, an office complex, or the like. In a non-limiting example, it is assumed that the secure area is an oil refinery and that the buffer zone (i.e., the geographical region) covers any area that falls within a twelve-kilometer (km) radius of the secure area. The plurality of edge nodes 102 may be deployed in the geographical region (shown in FIG. 2) surrounding the secure area (shown in FIG. 2) to monitor the geographical region.

A first edge node 102a may be a structure (e.g., a tower) that includes, therein, various sensors (i.e., a sensor suite) for monitoring the geographical region. Examples of the sensors may include, but are not limited to, radar sensors, infrared (IR) sensors, visible spectrum sensors, laser rangefinder sensors, light detection and ranging (LiDAR) sensors, acoustic sensors, or the like. The first edge node 102a may further include first processing circuitry that is configured to process sensor outputs or sensor data generated by the sensors. The generated sensor outputs may be processed for detecting, monitoring, and tracking various moving objects (e.g., humans, animals, vehicles, or the like) within the geographical region. The first processing circuitry may be further configured to assign threat levels to objects detected in the geographical region. The first edge node 102a may then initiate threat alert procedures and generate threat alerts based on the threat levels assigned to the detected objects.

It is assumed that the remaining edge nodes 102b-102n may be structurally and functionally similar to the first edge node 102a. The plurality of edge nodes 102 collectively constitute an edge node network that is configured to monitor an entirety of the geographical region. The edge node network is explained in conjunction with FIG. 2.

The remote node 104 includes suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for managing the plurality of edge nodes 102 (i.e., managing the edge node network). For example, the remote node 104 may be configured to introduce new edge nodes to the edge node network or remove existing edge nodes from the edge node network. The remote node 104 may be further configured to receive information from the plurality of edge nodes 102. In a non-limiting example, the remote node 104 may be a command-and-control center that receives information (e.g., threat alerts, sensor outputs, or the like) from the plurality of edge nodes 102. The remote node 104 may be configured to communicate the received threat alerts to a set of user devices of the plurality of user devices 106. In some embodiments, the remote node 104 may be further configured to initiate threat responses for various threat alerts received from the plurality of edge nodes 102.

Each of the plurality of user devices 106 includes suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for receiving the threat alerts from the remote node 104 or the plurality of edge nodes 102. In some embodiments, each user device may be associated with an authorization level (e.g., a security clearance level). In a non-limiting example, security for the secure area may be handled by a group or team of individuals organized in a hierarchy. In one embodiment, the team of individuals may include a total of five individuals, of which first through third individuals are ranked “officer”, a fourth individual is ranked “senior officer”, and a fifth individual is ranked “supervisor”. The first through fifth user devices 106a-106e may be possessed by the first through fifth individuals, respectively. In such a scenario, the first through third user devices 106a-106c may be associated with a first authorization level, the fourth user device 106d with a second authorization level, and the fifth user device 106e with a third authorization level, such that the third authorization level is higher than the second authorization level, and the second authorization level is higher than the first authorization level.

The remote node 104 or the plurality of edge nodes 102 may be configured to communicate the threat alerts to the first through fifth user devices 106a-106e based on a threat level associated with each of the received threat alerts and an authorization level of each of the first through fifth user devices 106a-106e. For the sake of brevity, only five user devices (i.e., the first through fifth user devices 106a-106e) are shown, each of which is associated with one of three authorization levels (i.e., the first through third authorization levels). However, it will be apparent to those of skill in the art that the plurality of user devices 106 may include any number of user devices associated with any number of authorization levels, without deviating from the scope of the disclosure.

Examples of the plurality of user devices 106 include, but are not limited to, personal computers (PCs), laptops, smartphones, tablets, phablets, smartwatches, or the like.

The communication network 108 is a medium through which messages and information are transmitted between the plurality of edge nodes 102, the remote node 104, and the plurality of user devices 106. Examples of the communication network 108 include, but are not limited to, a Wi-Fi network, a light fidelity (Li-Fi) network, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a satellite network, the Internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, and combinations thereof. Various entities in the environment 100 may connect to the communication network 108 in accordance with various wired and wireless communication protocols, such as Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof.

In operation, the plurality of edge nodes 102 (i.e., the edge node network) may be deployed or installed in the geographical region that is outside the secure area. The plurality of edge nodes 102 are communicatively coupled to form a mesh network. The first edge node 102a may include first and second pluralities of sensors. The first plurality of sensors may include a set of radar sensors, a set of infrared (IR) sensors, a set of visible spectrum sensors, and a set of laser rangefinder sensors. The second plurality of sensors may include a set of LiDAR sensors, a set of acoustic sensors, and a set of inertial measurement unit (IMU) sensors. Operation of the first and second pluralities of sensors is explained in conjunction with FIG. 3. The first plurality of sensors may generate a first plurality of sensor outputs based on a monitoring of the geographical region. The first plurality of sensor outputs may include a first set of sensor outputs of the set of radar sensors, a second set of sensor outputs of the set of IR sensors, a third set of sensor outputs of the set of visible spectrum sensors, and a fourth set of sensor outputs of the set of laser rangefinder sensors. Similarly, the second plurality of sensors may generate a second plurality of sensor outputs. The second plurality of sensor outputs may include a fifth set of sensor outputs of the set of LiDAR sensors, a sixth set of sensor outputs of the set of acoustic sensors, and a seventh set of sensor outputs of the set of IMU sensors.

The first edge node 102a may further include first processing circuitry (shown in FIG. 3) that may be configured to determine a likelihood (i.e., a probability) of a presence of an object in the geographical region based on at least one of the first through sixth sets of sensor outputs. For the sake of brevity, the terms “likelihood” and “probability” are used interchangeably throughout the disclosure. For example, the first processing circuitry may determine the probability of an object being present in the geographical region based on the first set of sensor outputs. Further, the first processing circuitry may detect a current orientation (e.g., a tilt angle, a height of positioning, or the like) of each of the first through sixth sets of sensors based on the seventh set of sensor outputs generated by the set of IMU sensors. If the determined likelihood of an object being present in the geographical region exceeds a threshold value (i.e., a threshold probability), the first processing circuitry may adjust the current orientation of the first through sixth sets of sensors. For example, the first processing circuitry may determine that a moving object is likely to be present towards the west of the first edge node 102a based on the first set of sensor outputs. Based on the determination that an object is likely to be present towards the west of the first edge node 102a, the first processing circuitry may detect whether each sensor of the first plurality of sensors is oriented (i.e., the current orientation) towards the west of the first edge node 102a. In a non-limiting example, the first processing circuitry may detect that the current orientation of the set of IR sensors is not towards the west of the first edge node 102a. Consequently, the first processing circuitry may adjust the current orientation of the set of IR sensors to orient the set of IR sensors towards the west of the first edge node 102a where the object is likely to be present. In other words, the first processing circuitry may adjust the current orientation of the set of IR sensors such that a field-of-vision of the set of IR sensors includes an area that is to the west of the first edge node 102a where the object is likely to be present.
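
A non-limiting sketch of this re-orientation step follows; the SensorMount class, the 10-degree alignment tolerance, and the bearing representation are all assumptions made only to keep the example concrete and runnable.

```python
PRESENCE_THRESHOLD = 0.7   # second threshold value (a probability)


class SensorMount:
    """Stand-in for the electro-mechanical mount carrying one set of sensors."""

    def __init__(self, name: str, azimuth_deg: float):
        self.name = name
        self.azimuth_deg = azimuth_deg   # current orientation, as reported via the IMU outputs

    def rotate_to(self, target_deg: float) -> None:
        print(f"{self.name}: rotating from {self.azimuth_deg:.0f} to {target_deg:.0f} degrees")
        self.azimuth_deg = target_deg


def refocus(likelihood: float, bearing_deg: float, mounts: list) -> None:
    """Re-orient any sensor set that is not already pointing at the likely object bearing."""
    if likelihood <= PRESENCE_THRESHOLD:
        return                                        # presence not likely enough; leave sensors as-is
    for mount in mounts:
        # Tolerate a small misalignment; otherwise adjust the current orientation.
        if abs((mount.azimuth_deg - bearing_deg + 180) % 360 - 180) > 10:
            mount.rotate_to(bearing_deg)


# Example: radar output suggests an object to the west (bearing 270 degrees) with probability 0.9.
refocus(0.9, 270.0, [SensorMount("IR sensors", 90.0), SensorMount("LiDAR sensors", 265.0)])
```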

Consequently, the first processing circuitry may detect an object in the geographical region, based on a combination of two or more of the first through sixth sets of sensor outputs. Following the detection of the object, the first processing circuitry may determine a current position and a current velocity of the detected object, based on the first and second pluralities of sensor outputs. The first processing circuitry may track a movement of the detected object in the geographical region based on the first and second pluralities of sensor outputs. Further, the first processing circuitry may forecast a trajectory of the detected object in the geographical region based on the tracked movement of the detected object. The forecasted trajectory may be indicative of a series of forecasted positions and a series of forecasted velocities of the detected object.
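
The disclosure does not state how the trajectory is forecasted. The following sketch uses a simple constant-velocity extrapolation of the tracked positions, purely as an illustrative stand-in for whatever forecasting method an implementation might use.

```python
def forecast_trajectory(track, steps=5, dt=1.0):
    """track: list of (t, x, y) samples of the tracked movement (seconds, metres).
    Returns a list of (t, x, y, vx, vy) forecasted positions and velocities."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)            # current velocity estimate from the last two samples
    vy = (y1 - y0) / (t1 - t0)
    return [(t1 + k * dt, x1 + k * dt * vx, y1 + k * dt * vy, vx, vy)
            for k in range(1, steps + 1)]


# Example: an object moving roughly north-east at about 1.4 m/s.
for point in forecast_trajectory([(0.0, 10.0, 20.0), (1.0, 11.0, 21.0)]):
    print(point)
```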

The first processing circuitry may determine, using a first machine learning model, an intent of the detected object, based on at least a type (e.g., human, vehicle, animal, or the like) of the detected object and the forecasted trajectory of the detected object. The intent of the detected object may be one of a first type of intent or a second type of intent. In an embodiment, the first type of intent may correspond to “suspicious”, while the second type of intent may correspond to “non-suspicious”. In a non-limiting example, the detected object may be determined as having the second type of intent if the detected object is an animal (e.g., a cow) moving across the geographical region with a low velocity. Similarly, the detected object may be determined as having the first type of intent, if the detected object is a human moving towards the secure area (i.e., the future trajectory) at a high velocity.

For threat assessment of the detected object, the first processing circuitry may provide the determined intent of the object as input to a second machine learning model. Based on an output of the second machine learning model, the first processing circuitry may assign a threat level to the detected object. In an embodiment, the assigned threat level may correspond to one of seven levels, for example, “0” through “6”. A higher value of the assigned threat level may be indicative of a higher likelihood of the detected object being a threat to the secure area. For example, if the detected object is assigned a threat level “0”, it may be assumed that the detected object is not deemed to be a threat. In other words, there is little to no chance of the detected object attempting to enter or intrude into the secure area.

However, if the assigned threat level exceeds a threshold value (e.g., “0”), the first processing circuitry may initiate a threat alert procedure. In other words, if the assigned threat level is greater than “0”, the first processing circuitry initiates a threat alert procedure. Based on the initiation of the threat alert procedure, the first processing circuitry may communicate a threat alert to the remote node 104. The threat alert may be indicative of an entity identifier, the assigned threat level, the current position, the current velocity, and the forecasted trajectory of the detected object. The first processing circuitry may further communicate, to the remote node 104, a terrain signature (i.e., a terrain map) of the geographical region. The terrain signature may indicate a type of terrain in the geographical region, an elevation of the terrain in the geographical region, a foliage coverage in the geographical region, a type of foliage in the geographical region, or the like. The terrain signature may further indicate current weather conditions in the geographical region, forecasted weather conditions in the geographical region, a level of visibility in the geographical region, or the like.
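
As a non-limiting illustration of what such a threat alert might carry, the sketch below models the payload as a small data structure; the field names are assumptions, and only the listed contents (entity identifier, threat level, current position and velocity, forecasted trajectory) come from the description.

```python
from dataclasses import dataclass, field, asdict
from typing import List, Tuple


@dataclass
class ThreatAlert:
    entity_id: str                                  # entity identifier for the detected object
    threat_level: int                               # assigned level on the "0"-"6" scale
    position: Tuple[float, float]                   # current position (x, y) in metres
    velocity: Tuple[float, float]                   # current velocity (vx, vy) in m/s
    forecasted_trajectory: List[Tuple[float, float]] = field(default_factory=list)


alert = ThreatAlert("obj-0142", 5, (11.0, 21.0), (1.0, 1.0), [(12.0, 22.0), (13.0, 23.0)])
print(asdict(alert))   # e.g. serialized before being communicated to the remote node 104
```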

The remote node 104 may communicate the received threat alert to a set of user devices of the plurality of user devices 106 based on the corresponding authorization level. In a non-limiting example, if the assigned threat level is low (e.g., “1-3”), the remote node 104 may communicate the received threat alert to a set of user devices of the first through third user devices 106a-106c (i.e., user devices with low authorization level). In other words, the received threat alert may not be communicated to the fourth user device 106d or the fifth user device 106e. If the assigned threat level is moderate (e.g., “3” or “4”), the remote node 104 may communicate the received threat alert to the fourth user device 106d. In such a scenario, the threat alert may or may not be communicated to other user devices (e.g., the first through third user devices 106a-106c and the fifth user device 106e). However, if the assigned threat level is high (e.g., “5” or “6”), the remote node 104 may communicate the received threat alert to the fifth user device 106e (i.e., user device with highest authorization level). In such a scenario, the threat alert may or may not be communicated to other user devices (e.g., the first through fourth user devices 106a-106d). In some embodiments, the remote node 104 may initiate a threat response procedure based on the received threat alert.
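
A non-limiting sketch of this routing rule follows. The device identifiers mirror the example above, while the exact banding of threat levels to authorization levels (the text's low/moderate/high bands overlap at level "3") is simplified here and should be treated as an assumption.

```python
DEVICES = {
    "106a": 1, "106b": 1, "106c": 1,   # officers: first authorization level
    "106d": 2,                          # senior officer: second authorization level
    "106e": 3,                          # supervisor: third authorization level
}


def required_authorization(threat_level: int) -> int:
    if threat_level >= 5:
        return 3                        # high threat: highest authorization level
    if threat_level >= 4:
        return 2                        # moderate threat
    return 1                            # low threat


def select_devices(threat_level: int) -> list:
    """Return the user devices whose authorization level matches the threat-level band."""
    needed = required_authorization(threat_level)
    return [device for device, level in DEVICES.items() if level == needed]


print(select_devices(2))   # -> ['106a', '106b', '106c']
print(select_devices(6))   # -> ['106e']
```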

FIG. 2 is a block diagram 200 that illustrates a layout of the edge node network in the geographical region, in accordance with an exemplary embodiment of the present disclosure. FIG. 2 illustrates the plurality of edge nodes 102, the remote node 104, the secure area, and the geographical region. Hereinafter, the secure area and the geographical region are designated and referred to as “the secure area 202” and “the geographical region 204”, respectively. For the sake of brevity, only the first through eighth edge nodes 102a-102h, of the plurality of edge nodes 102, are shown. The first through eighth edge nodes 102a-102h (and remaining edge nodes of the plurality of edge nodes 102) may be located in the geographical region 204 and outside the secure area 202. Each of the first through eighth edge nodes 102a-102h may be configured to monitor a section of the geographical region 204. In one embodiment, each of the plurality of edge nodes 102 may be configured to monitor an area (i.e., a section) within an eight-km radius of a location of the corresponding edge node. For example, the first edge node 102a may be configured to monitor a section of the geographical region 204 that is within an eight-km radius of the first edge node 102a. Hereinafter, the section of the geographical region 204 that is to be monitored by the first edge node 102a is referred to as “first section”. In the current embodiment, it is assumed that each of the plurality of edge nodes 102 is configured to monitor an area within a radius of 8 km therefrom. However, in another embodiment, the plurality of edge nodes 102 may be configured to monitor areas of varying sizes, without deviating from the scope of the disclosure. The plurality of edge nodes 102 (e.g., the first through eighth edge nodes 102a-102h) may be located such that an entirety of the geographical region 204 is monitored.
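
As a non-limiting illustration of the coverage idea, the sketch below checks whether a point in the geographical region 204 lies within the assumed eight-km monitoring radius of at least one edge node, using the haversine great-circle distance; the node coordinates are invented for the example.

```python
import math

MONITORING_RADIUS_KM = 8.0   # assumed monitoring radius of each edge node


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two latitude/longitude points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))


def is_covered(point, edge_nodes):
    """point: (lat, lon); edge_nodes: list of (lat, lon) node locations."""
    return any(haversine_km(*point, *node) <= MONITORING_RADIUS_KM for node in edge_nodes)


nodes = [(18.60, 73.80), (18.65, 73.90), (18.70, 73.75)]   # hypothetical node locations
print(is_covered((18.62, 73.82), nodes))    # True: within 8 km of the first node
print(is_covered((19.20, 74.50), nodes))    # False: outside every monitored section
```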

The remote node 104 may be located within the secure area 202, within the geographical region 204, or outside the geographical region 204. In a non-limiting example, it is assumed that the remote node 104 is located outside the geographical region 204.

The plurality of edge nodes 102 may be communicatively coupled to each other based on various network topologies. Examples of the network topologies may include, but are not limited to, a mesh topology, a star topology, a bus topology, a ring topology, a hybrid topology, or the like. For the sake of brevity, it is assumed that the first through nth edge nodes 102a-102n are connected to form a mesh network (i.e., mesh topology). The plurality of edge nodes 102 may be connected to each other by way of the communication network 108 or any other network, without deviating from the scope of the disclosure. As shown in FIG. 2, the first through eighth edge nodes 102a-102h are connected to form a mesh network. In other words, the edge node network is a mesh network. For example, the first edge node 102a may be communicatively coupled (i.e., connected) to the second edge node 102b and the eighth edge node 102h. Similarly, the second edge node 102b may be communicatively coupled to the third edge node 102c and the fourth edge node 102d, in addition to the first edge node 102a. It will be apparent to those of skill in the art that the first through eighth edge nodes 102a-102h may be connected in any configuration, without deviating from the scope of the disclosure.

In a non-limiting example, the remote node 104 is shown to be connected (i.e., communicatively coupled) to only the first and seventh edge nodes 102a and 102g. Any message (e.g., threat alert) or information (e.g., the terrain signature of the geographical region 204) that is to be communicated by an edge node, to the remote node 104, is to be routed through the first edge node 102a or the seventh edge node 102g. For example, the fifth edge node 102e may communicate a message to the remote node 104, by way of the fourth edge node 102d, the second edge node 102b, and the first edge node 102a. Operation of a mesh network is well known to those of skill in the art.
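
A non-limiting sketch of such routing follows: breadth-first search over an assumed adjacency map (loosely modeled on the FIG. 2 description, with the remote node 104 reachable only via the first and seventh edge nodes) returns one shortest hop path. The topology is illustrative, not the disclosed layout.

```python
from collections import deque

# Assumed mesh adjacency; node identifiers follow the reference numerals used in the text.
MESH = {
    "102a": ["102b", "102h", "104"],
    "102b": ["102a", "102c", "102d"],
    "102c": ["102b"],
    "102d": ["102b", "102e"],
    "102e": ["102d", "102f"],
    "102f": ["102e", "102g"],
    "102g": ["102f", "102h", "104"],
    "102h": ["102a", "102g"],
    "104":  ["102a", "102g"],
}


def route(source: str, destination: str = "104"):
    """Return one shortest path of node identifiers from source to destination."""
    queue, visited = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for neighbour in MESH[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None


print(route("102e"))   # -> ['102e', '102f', '102g', '104'] with this assumed topology
```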

FIG. 3 is a block diagram that illustrates an edge node of the plurality of edge nodes 102, in accordance with an exemplary embodiment of the present disclosure. For the sake of brevity, FIG. 3 is assumed to illustrate the first edge node 102a. The first edge node 102a is shown to include the first and second pluralities of sensors. Hereinafter, the set of radar sensors, the set of IR sensors, the set of visible spectrum sensors, and the set of laser rangefinder sensors are designated and referred to as “the set of radar sensors 302”, “the set of IR sensors 304”, “the set of visible spectrum sensors 306”, and “the set of laser rangefinder sensors 308”, respectively. Similarly, the set of LiDAR sensors, the set of acoustic sensors, and the set of IMU sensors are designated and referred to as “the set of LiDAR sensors 310”, “the set of acoustic sensors 312”, and “the set of IMU sensors 314”, respectively. For the sake of brevity, the set of radar sensors 302, the set of IR sensors 304, the set of visible spectrum sensors 306, and the set of laser rangefinder sensors 308 are collectively referred to as “first through fourth sets of sensors 302-308”, respectively. Similarly, the set of LiDAR sensors 310, the set of acoustic sensors 312, and the set of IMU sensors 314 are collectively referred to as “fifth through seventh sets of sensors 310-314”.

The first edge node 102a may further include a global navigation satellite system (GNSS) device 316. The first edge node 102a may further include the first processing circuitry (hereinafter, designated and referred to as “the first processing circuitry 318”), a first machine learning engine 320, a first memory 322, and a first network interface 324. The first and second pluralities of sensors, the GNSS device 316, the first processing circuitry 318, the first machine learning engine 320, the first memory 322, and the first network interface 324 may communicate with each other by way of a first communication bus 326.

In a non-limiting example, the first and second pluralities of sensors are co-located in the first edge node 102a. In other words, the first and second pluralities of sensors are located on a same structure (i.e., the first edge node 102a). In another embodiment, one or more sensors, of the first and second pluralities of sensors, may be located away from a location of the first edge node 102a. For example, the set of radar sensors 302 may be located at a distance of “2” km from the first edge node 102a (i.e., away from the first processing circuitry 318). In such a scenario, the set of radar sensors 302 may be associated with a transceiver or a network interface (e.g., a second network interface; not shown) that is communicatively coupled to the first network interface 324. The first set of sensor outputs generated by the set of radar sensors 302 may be communicated to the first processing circuitry 318 by way of the coupling between the first network interface 324 and the network interface associated with the set of radar sensors 302. A combination of the set of radar sensors 302 and the associated network interface may be referred to as an “ad-hoc” node. In an embodiment, one or more ad-hoc nodes may be introduced in the edge node network if greater coverage of the geographical region 204 is required.

In a non-limiting example, it is assumed that each of the first and second pluralities of sensors present on the first edge node 102a is coupled to an electro-mechanical mount (not shown). The first processing circuitry 318 may adjust an orientation (e.g., a tilt angle, a height from the ground surface, or the like) of any sensor included in the first edge node 102a by controlling a corresponding electro-mechanical mount. For example, the first processing circuitry 318 may adjust a current orientation of the set of LiDAR sensors 310 to point to the west of the first edge node 102a by communicating appropriate instructions to a corresponding electro-mechanical mount. Similarly, the first processing circuitry 318 may increase a height at which the set of LiDAR sensors 310 is located from the ground surface, by communicating appropriate instructions to a corresponding electro-mechanical mount.

The set of radar sensors 302 includes one or more radars configured to transmit radio waves and receive radio waves that have been reflected by any objects in the geographical region 204 (i.e., the first section). The set of radar sensors 302 may include sky-wave radars, surface-wave radars, or a combination thereof. Each radar sensor of the set of radar sensors 302 may include a single antenna or multiple antennae for transmitting radio waves and receiving reflected radio waves. In some embodiments, the set of radar sensors 302 may include one or more processors configured for processing the first set of sensor outputs generated by the set of radar sensors 302 and determining properties of objects (i.e., moving objects) in the geographical region 204 (i.e., in the first section). For example, the one or more processors may determine a velocity of an object detected by the set of radar sensors 302 and a current position of the detected object based on the first set of sensor outputs.

The set of IR sensors 304 may include one or more IR sensors configured to detect or identify objects in the geographical region 204 (i.e., the first section) based on IR radiation emitted by the objects. The set of IR sensors 304 may include passive IR sensors, active IR sensors, or a combination thereof. The set of IR sensors 304 may generate the second set of sensor outputs. The second set of sensor outputs may be indicative of a temperature (i.e., a heat map) of an area around the set of IR sensors 304. The second set of sensor outputs may be further indicative of temperature profiles of various objects in the first section of the geographical region 204.

The set of visible spectrum sensors 306 may include one or more cameras configured to generate an image or video feed of an area (e.g., the first section) within a field-of-vision of the set of visible spectrum sensors 306. The set of visible spectrum sensors 306 may include dome cameras, bullet cameras, internet protocol (IP) cameras, or a combination thereof. The third set of sensor outputs generated by the set of visible spectrum sensors 306 may include images and/or videos of the first section as captured by the set of visible spectrum sensors 306.

The set of laser rangefinder sensors 308 may include one or more laser telemeters configured to detect or identify objects in the geographical region 204 (i.e., the first section). A laser telemeter uses laser beams to determine a distance between an object and the laser telemeter. A laser rangefinder sensor may operate on a time-of-flight principle by transmitting a laser pulse in a narrow beam towards an object and measuring a time taken by the laser pulse to be reflected off the object and returned to the set of laser rangefinder sensors 308. The fourth set of sensor outputs generated by the set of laser rangefinder sensors 308 may be indicative of times taken by laser beams, transmitted by the set of laser rangefinder sensors 308, to be received after being reflected by objects in the first section. The fourth set of sensor outputs may be further indicative of a wavelength and a frequency of the transmitted laser beams, a time of transmission of the laser beams, a time of reception of the laser beams, and a wavelength and a frequency of the received laser beams. The fourth set of sensor outputs may be further indicative of a distance of each object from the set of laser rangefinder sensors 308 (i.e., from the first edge node 102a).
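
For reference, the time-of-flight relation underlying both the laser rangefinder and LiDAR measurements can be written as a one-line computation; the example values are illustrative.

```python
SPEED_OF_LIGHT = 299_792_458.0   # metres per second


def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance in metres between the sensor and the reflecting object (pulse travels out and back)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0


# Example: a pulse that returns after 6.67 microseconds reflects off an object roughly 1 km away.
print(range_from_time_of_flight(6.67e-6))   # about 999.8 metres
```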

The set of LiDAR sensors 310 may include one or more LiDAR sensors configured to detect or identify objects in the geographical region 204 (i.e., the first section), based on laser light transmitted and received post reflection by the objects. The set of LiDAR sensors 310 may also operate on the time-of-flight principle. The fifth set of sensor outputs may include, but is not limited to, a time of transmission of laser light, a wavelength and a frequency of transmitted laser light, a time of reception of reflected laser light, and a wavelength and a frequency of the reflected laser light, or the like.

The set of acoustic sensors 312 may include one or more acoustic sensors configured to detect or identify objects in the geographical region 204 (i.e., the first section), based on acoustic waves (i.e., sound waves) transmitted and received post reflection by the objects. The set of acoustic sensors 312 may also operate on the time-of-flight principle. The sixth set of sensor outputs may include, but is not limited to, a time of transmission of an acoustic wave, a wavelength and a frequency of the transmitted acoustic wave, a time of reception of a reflected acoustic wave, and a wavelength and a frequency of the reflected acoustic wave.

The set of IMU sensors 314 may include one or more IMU sensors configured to measure or determine an orientation of each sensor (e.g., the set of IR sensors 304) of the first through sixth sets of sensors 302-312. Orientation of a sensor may include, but is not limited to, an angular velocity of the sensor, a tilt angle of the sensor, a height at which the sensor is located from the ground surface, or the like. Each IMU sensor, of the set of IMU sensors 314, may include a set of accelerometers, a set of gyroscopes, a set of magnetometers, or the like for measuring the orientation of each of the first through sixth sets of sensors 302-312. In a non-limiting example, for measuring an orientation of one of the set of IR sensors 304, one of the set of IMU sensors 314 may be affixed to an electro-mechanical mount that is coupled or attached to the IR sensor. Sensor outputs of the IMU sensor affixed to the electro-mechanical mount may indicate an orientation of the electro-mechanical mount and, thereby, the orientation of the corresponding IR sensor. The set of IMU sensors 314 may generate a seventh set of sensor outputs indicative of an orientation of each sensor of the first through sixth sets of sensors 302-312.
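
As a non-limiting illustration, a sensor's inclination from vertical can be estimated from the gravity vector reported by an accelerometer on its mount, as sketched below; axis conventions are assumed, and practical IMUs typically fuse gyroscope and magnetometer data as well.

```python
import math


def tilt_angle_deg(ax: float, ay: float, az: float) -> float:
    """Inclination of the sensor from vertical, in degrees, from accelerometer axes (in g units)."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))


print(tilt_angle_deg(0.0, 0.0, 1.0))    # 0.0 -> mount is level
print(tilt_angle_deg(0.5, 0.0, 0.866))  # about 30 -> mount tilted roughly 30 degrees
```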

Operation of the set of radar sensors 302, the set of IR sensors 304, the set of visible spectrum sensors 306, the set of laser rangefinder sensors 308, the set of LiDAR sensors 310, the set of acoustic sensors 312, and the set of IMU sensors 314 is well known to those of skill in the art. For the sake of brevity, the first edge node 102a is shown to include only seven types of sensors (i.e., the first through seventh sets of sensors 302-314). In other embodiments, the first edge node 102a may include other types of sensors (e.g., seismic sensors) without deviating from the scope of the disclosure.

For the sake of brevity, the first through fourth sets of sensor outputs are collectively referred to as “first plurality of sensor outputs”. The fifth through seventh sets of sensor outputs are collectively referred to as “second plurality of sensor outputs”.

The GNSS device 316 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for determining a geographical position of the first edge node 102a. Geographical position of the first edge node 102a, at any time-instance, may include, but is not limited to, a latitude of the first edge node 102a, a longitude of the first edge node 102a, an elevation or altitude of the first edge node 102a, or the like. The GNSS device 316 may operate in conjunction with a satellite navigation system to determine a geo-spatial position of the first edge node 102a. Examples of the satellite navigation system may include, but are not limited to, global positioning system (GPS), global navigation satellite system (GLONASS), Galileo, Beidou, Navigation with Indian constellation (NAVIC), or the like.

The first processing circuitry 318 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, that may be configured to execute the instructions stored in the first memory 322 for processing the first through seventh sets of sensor outputs and assigning threat levels to any objects that are in the first section of the geographical region 204. Based on the processing of the first through seventh sets of sensor outputs, the first processing circuitry 318 may be configured to detect an object (i.e., a moving object) in the first section and track a movement of the object within the geographical region. The first processing circuitry 318 may detect a current position and velocity of the detected object and forecast a trajectory of the detected object based on the first through seventh sets of sensor outputs. Further, the first processing circuitry 318 may recognize the detected object or classify the object into one of many types (e.g., human, animal, vehicle, or the like) using a set of image processing techniques. Consequently, the first processing circuitry 318 may be configured to determine an intent of the detected object and assign a threat level to the detected object based on the determined intent. The first processing circuitry 318 may be further configured to initiate the threat alert procedure based on the threat level assigned to the detected object.

The first processing circuitry 318 may be implemented by one or more processors, such as, but not limited to, an application-specific integrated circuit (ASIC) processor, a reduced instruction set computing (RISC) processor, a complex instruction set computing (CISC) processor, and a field-programmable gate array (FPGA) processor. The one or more processors may also correspond to central processing units (CPUs), graphics processing units (GPUs), network processing units (NPUs), digital signal processors (DSPs), or the like. It will be apparent to a person of ordinary skill in the art that the first processing circuitry 318 may be compatible with multiple operating systems. The first machine learning engine 320 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, that may be configured to execute or utilize machine learning models for detection of an object (e.g., a moving object), determination of the intent of the detected object, and threat assessment of the detected object. Examples of the first machine learning engine 320 may include, but are not limited to, an ASIC processor, a RISC processor, a CISC processor, and an FPGA processor. The first machine learning engine 320 may also correspond to a CPU, a GPU, an NPU, a DSP, or the like. It will be apparent to a person of ordinary skill in the art that the first machine learning engine 320 may be compatible with multiple operating systems. Further, the first machine learning engine 320 may implement any suitable machine-learning techniques, statistical techniques, or probabilistic techniques for performing the one or more operations associated with execution of various machine learning models (shown in FIG. 5) for the detection of the object, prediction of the intent of the detected object, and the threat assessment of the detected object.

For the sake of brevity, the first machine learning engine 320 is shown to be separate from the first processing circuitry 318. However, it will be apparent to those of skill in the art that the first machine learning engine 320 may be integrated with the first processing circuitry 318, without deviating from the scope of the disclosure. In such a scenario, the first processing circuitry 318 may perform all functions of the first machine learning engine 320.

The first memory 322 may include suitable logic, circuitry, and interfaces that may be configured to store one or more instructions which, when executed by the first processing circuitry 318, cause the first processing circuitry 318 to perform various operations pertaining to the detection of the object (i.e., moving object) and the threat assessment of the object. The first memory 322 may further store therein a terrain database 328 that includes a terrain signature (i.e., topography information or terrain map) of the geographical region 204. The terrain signature of the geographical region 204 may include, but is not limited to, an elevation of the geographical region 204, an altitude of the geographical region 204, a slope of the geographical region 204, a type of terrain (e.g., muddy, rocky, or grassy) of the geographical region 204, or the like. The terrain database 328 may be further configured to store an extent of foliage coverage in the geographical region 204, a type of foliage in the geographical region 204, current and forecasted weather conditions in the geographical region 204, current and forecasted visibility levels, or the like.

In a non-limiting example, the terrain database 328 may store a virtual map (not shown) of the geographical region 204. The virtual map may represent the geographical region 204 as a grid partitioned into a plurality of segments (e.g., square segments of “900” square meters each). The virtual map may indicate a type of foliage in each segment, an extent of foliage coverage in each segment, a type of terrain in each segment, weather conditions in each segment, a presence of a water body (e.g., a lake or a river) in each segment, a slope of terrain in each segment, an elevation (from sea level) of each segment, or the like. The terrain database 328 may be updated based on information received from a geospatial database. The terrain database 328 may be updated at regular intervals (e.g., daily, weekly, monthly, or the like).

Examples of the first memory 322 may include, but are not limited to, a random-access memory (RAM), a read-only memory (ROM), a removable storage drive, a hard disk drive (HDD), a flash memory, a solid-state memory, or the like. It will be apparent to a person skilled in the art that the scope of the disclosure is not limited to realizing the first memory 322 in the first edge node 102a, as described herein. In another embodiment, the first memory 322 may be realized in the form of a database or a cloud storage working in conjunction with the first edge node 102a, without departing from the scope of the disclosure.
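
The terrain database 328 described above is essentially a grid of attributed map segments. The following Python sketch is illustrative only and is not part of the disclosure; the class names, field names, and the 30 m cell size (i.e., square segments of roughly 900 square meters) are assumptions chosen to show one plausible way such a segmented virtual map could be represented and queried.

```python
from dataclasses import dataclass, field

@dataclass
class TerrainSegment:
    """One grid cell of a hypothetical virtual map (all field names illustrative)."""
    elevation_m: float        # elevation above sea level
    slope_deg: float          # slope of the terrain within the cell
    terrain_type: str         # e.g., "muddy", "rocky", or "grassy"
    foliage_type: str         # e.g., "shrub" or "dense forest"
    foliage_coverage: float   # fraction of the cell covered by foliage (0.0-1.0)
    has_water_body: bool = False
    weather: str = "clear"

@dataclass
class TerrainDatabase:
    """Grid of square segments (30 m x 30 m = 900 sq. m) keyed by (row, col)."""
    cell_size_m: float = 30.0
    grid: dict = field(default_factory=dict)  # (row, col) -> TerrainSegment

    def segment_at(self, x_m: float, y_m: float) -> TerrainSegment:
        # Map local map coordinates onto the containing grid cell.
        key = (int(y_m // self.cell_size_m), int(x_m // self.cell_size_m))
        return self.grid[key]

# Register one segment and look it up by local coordinates.
db = TerrainDatabase()
db.grid[(0, 0)] = TerrainSegment(512.0, 3.5, "grassy", "shrub", 0.4)
print(db.segment_at(10.0, 20.0).terrain_type)  # -> "grassy"
```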

The first network interface 324 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, that may be configured to enable the first edge node 102a to communicate with the remote node 104 and/or the remaining edge nodes of the plurality of edge nodes 102. The first network interface 324 may be implemented as hardware, software, firmware, or a combination thereof. Examples of the first network interface 324 may include a network interface card, a physical port, a network interface device, an antenna, a radio frequency transceiver, a wireless transceiver, an Ethernet port, a universal serial bus (USB) port, or the like.

In one embodiment, the remote node 104 and each of the plurality of edge nodes 102 may further include a time-keeping device 330. The time-keeping device 330 located in the remote node 104 and each of the plurality of edge nodes 102 may maintain time in accordance with a reference clock (e.g., an atomic clock). The time-keeping device 330 may help maintain synchronization of time across the remote node 104 and the plurality of edge nodes 102. Any measurements (e.g., a current position of an object, a current velocity of the object, a forecasted trajectory of the object, or the like) made by the remote node 104 and the plurality of edge nodes 102 may be in reference to a time maintained by the time-keeping device 330.

The remaining edge nodes 102b-102h may be structurally and functionally similar to the first edge node 102a. However, types of sensors and a number of sensors included in each edge node may vary. For example, the third edge node 102c may not include any acoustic sensors.

FIG. 4 is a block diagram that illustrates the first processing circuitry 318, in accordance with an exemplary embodiment of the present disclosure. The first processing circuitry 318 includes a sensor output processing engine 402, a sensor management engine 404, a threat management engine 406, and a routing engine 408. The sensor output processing engine 402, the sensor management engine 404, the threat management engine 406, and the routing engine 408 communicate with each other by way of a second communication bus 410.

The sensor output processing engine 402 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, that may be configured to process the first through seventh sets of sensor outputs received from the first and second pluralities of sensors. In a non-limiting example, the sensor output processing engine 402 may process or analyze each of the first through sixth sets of sensor outputs to determine if there are any significant deviations in a corresponding set of sensor outputs. A deviation in values of sensor outputs generated by a sensor may be due to detection of a moving object by a corresponding sensor. In a non-limiting example, sensor outputs (e.g., a time-series) generated by a first radar sensor of the set of radar sensors 302 may remain largely constant until a moving object is detected by the first radar sensor. The sensor outputs generated by the first radar sensor may deviate significantly, following the reception of the radio waves reflected by the object. The sensor output processing engine 402 may process the sensor outputs generated by the first radar sensor and determine that the sensor outputs have deviated by a significant amount. Based on the determination that the sensor outputs have deviated by a significant amount, the sensor output processing engine 402 may communicate the sensor outputs of the first radar sensor to the first machine learning engine 320 for determining a likelihood of a presence of a moving object in the geographical region 204 (i.e., the first section). A minimal illustrative sketch of such deviation detection is provided after the description of the sensor management engine 404 below.

The sensor management engine 404 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, that may be configured to determine and adjust a current orientation of each of the first through sixth sets of sensors 302-312. The current orientation may be determined based on the seventh set of sensor outputs received from the set of IMU sensors 314. The sensor management engine 404 may adjust an orientation of one or more sensors of the first through sixth sets of sensors 302-312, when the sensor output processing engine 402 determines that a likelihood of a moving object being present in the first section exceeds a first threshold value. In a non-limiting example, based on the determination that the likelihood of a moving object being present in the first section exceeds the first threshold value, the sensor management engine 404 may adjust an orientation of the set of LiDAR sensors 310 to focus in a direction in which the object is likely to be present.
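
The sensor output processing engine 402 flags a set of sensor outputs when the values deviate significantly from their recent baseline. The Python sketch below is an assumption-laden illustration rather than the disclosed implementation: it uses a simple z-score test over a sliding window, and the window contents, threshold, and function names are made-up values used only to show how anomalous radar outputs might be singled out for the first machine learning engine 320.

```python
import statistics

def significant_deviation(recent_samples, new_value, z_threshold=3.0):
    """Return True if new_value deviates markedly from the recent baseline.

    recent_samples is the sliding window of outputs from one sensor; the
    z-score threshold of 3.0 is an assumption, not a disclosed value.
    """
    if len(recent_samples) < 2:
        return False
    mean = statistics.fmean(recent_samples)
    spread = statistics.pstdev(recent_samples)
    if spread == 0.0:
        return new_value != mean
    return abs(new_value - mean) / spread > z_threshold

# Largely constant radar returns, then a sharp change once an object reflects energy.
baseline = [0.98, 1.01, 0.99, 1.00, 1.02, 0.97]
print(significant_deviation(baseline, 1.01))  # False: within the normal baseline
print(significant_deviation(baseline, 4.70))  # True: forward outputs to the ML engine
```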

The threat management engine 406 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for initiating threat alert procedures and performing threat assessments of objects detected in the geographical region 204 (i.e., the first section). The threat management engine 406 may operate in conjunction with the first machine learning engine 320 to determine the intent of the detected objects and assign threat levels to the detected objects. Based on the assignment of the threat levels to the detected objects, the threat management engine 406 may generate threat alerts and communicate the threat alerts to the remote node 104 and/or the plurality of user devices 106.

The routing engine 408 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for management of routing by the first edge node 102a. The routing engine 408 may maintain a routing table. The routing table may be stored in the first memory 322 and may indicate network routes to various destinations on the edge node network. For example, the routing table may indicate that the fifth edge node 102e may be reached from the first edge node 102a by way of the second edge node 102b. The routing engine 408 may route or transmit messages (e.g., threat alerts), instructions, or information (e.g., the terrain signature) received by the first edge node 102a to corresponding destinations based on information stored in the routing table (a minimal illustrative sketch of such a table is provided after the description of FIG. 5 below).

FIG. 5 is a block diagram that illustrates the remote node 104, in accordance with an exemplary embodiment of the present disclosure. The remote node 104 includes second processing circuitry 502, a second memory 504, a second machine learning engine 506, a display device 508, and a second network interface 510. The second processing circuitry 502, the second memory 504, the second machine learning engine 506, the display device 508, and the second network interface 510 may communicate with each other by way of a third communication bus 512.
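
As a minimal illustration of the routing table maintained by the routing engine 408 (referenced in the routing-engine paragraph above), the sketch below models the table as a destination-to-next-hop mapping. The node identifiers and the dictionary-based representation are assumptions for this sketch, not structures taken from the disclosure.

```python
# Hypothetical routing table: destination node -> next hop on the mesh network.
ROUTING_TABLE = {
    "edge_node_102e": "edge_node_102b",  # fifth edge node reached via the second edge node
    "edge_node_102b": "edge_node_102b",  # direct neighbour
    "remote_node_104": "edge_node_102d",
}

def next_hop(destination: str) -> str:
    """Return the neighbour to which a message addressed to 'destination' is forwarded."""
    try:
        return ROUTING_TABLE[destination]
    except KeyError:
        raise ValueError(f"no route to {destination}") from None

print(next_hop("edge_node_102e"))  # -> "edge_node_102b"
```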

The second processing circuitry 502 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, that may be configured to execute the instructions stored in the second memory 504 for managing the edge node network. The second processing circuitry 502 may be further configured to receive threat alerts and the terrain signature of the geographical region 204 from the plurality of edge nodes 102. The second processing circuitry 502 may communicate the received threat alerts to a set of user devices of the plurality of user devices 106 based on a threat level associated with each of the received threat alerts and an authorization level of each user device of the set of user devices. The second processing circuitry 502 performs its various functions by way of an edge node network management engine 514, a threat alert routing engine 516, and a simulation engine 518.

The second memory 504 may include suitable logic, circuitry, and interfaces that may be configured to store one or more instructions which, when executed by the second processing circuitry 502, cause the second processing circuitry 502 to perform various operations pertaining to the management of the edge node network and the communication of the received threat alerts to one or more user devices of the plurality of user devices 106. The second memory 504 may further store therein an escalation matrix 520. The escalation matrix 520 may be a look-up table that maps threat levels to the authorization levels associated with the plurality of user devices 106. In other words, the escalation matrix 520 may indicate which threat alerts are to be communicated to which user devices of the plurality of user devices 106. In a non-limiting example, the escalation matrix 520 may indicate that threat alerts associated with the threat levels “1-3” are to be communicated to the first through third user devices 106a-106c that are associated with the first authorization level. Similarly, the escalation matrix 520 may indicate that threat alerts associated with the threat levels “4” and “5” are to be communicated to the fourth user device 106d that is associated with the second authorization level. Similarly, the escalation matrix 520 may indicate that threat alerts associated with the threat level “6” are to be communicated to the fifth user device 106e that is associated with the third authorization level. The second memory 504 may further store details of the plurality of edge nodes 102. For example, the second memory 504 may store an identifier of each edge node of the plurality of edge nodes 102, a configuration or topology of the edge node network, and the types and number of sensors located in each edge node of the plurality of edge nodes 102. The second memory 504 may further store, therein, the terrain signature of the geographical region 204. The terrain signature of the geographical region 204 may be received by the remote node 104 from any of the plurality of edge nodes 102 (e.g., the first edge node 102a).
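
A minimal sketch of the escalation matrix 520 described above, using the example mapping of threat levels “1-3”, “4” and “5”, and “6” to the first, second, and third authorization levels. The dictionary layout, device identifiers, and helper name are assumptions made only for illustration.

```python
# Escalation matrix from the example above: threat level -> required authorization level.
ESCALATION_MATRIX = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3}

# Hypothetical registry of user devices and their authorization levels.
USER_DEVICES = {"106a": 1, "106b": 1, "106c": 1, "106d": 2, "106e": 3}

def devices_for_threat(threat_level: int) -> list:
    """Select the user devices whose authorization level matches the threat level."""
    required = ESCALATION_MATRIX.get(threat_level)
    if required is None:
        return []  # e.g., a threat level of "0" is not escalated in this sketch
    return [device for device, auth in USER_DEVICES.items() if auth == required]

print(devices_for_threat(5))  # -> ['106d']
print(devices_for_threat(2))  # -> ['106a', '106b', '106c']
```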

The second machine learning engine 506 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for training machine learning models to detect objects (e.g., a moving object), determine intent of the detected objects, and determine threat levels of the detected objects. Examples of the second machine learning engine 506 may include, but are not limited to, an ASIC processor, a RISC processor, a CISC processor, and an FPGA processor. The second machine learning engine 506 may also correspond to a CPU, a GPU, an NPU, a DSP, or the like. It will be apparent to a person of ordinary skill in the art that the second machine learning engine 506 may be compatible with multiple operating systems. Further, the second machine learning engine 506 may implement any suitable machine-learning techniques, statistical techniques, or probabilistic techniques for performing the one or more operations associated with training of various machine learning models (shown in FIG. 6) for the detection of the object, prediction of the intent of the detected object, and the threat assessment of the detected object. The second machine learning engine 506 is explained in conjunction with FIG. 6.

For the sake of brevity, the second machine learning engine 506 is shown to be separate from the second processing circuitry 502. However, it will be apparent to those of skill in the art that the second machine learning engine 506 may be integrated with the second processing circuitry 502, without deviating from the scope of the disclosure. In such a scenario, the second processing circuitry 502 may perform all functions of the second machine learning engine 506.

The edge node network management engine 514 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for managing the edge node network (i.e., the plurality of edge nodes 102). The edge node network management engine 514 may be configured to receive details (e.g., an identifier, types of sensors installed, number of sensors installed, geographical coordinates, or the like) of any edge node that is installed or deployed in the geographical region 204. The edge node network management engine 514 may store the details of the plurality of edge nodes 102 in the second memory 504. The edge node network management engine 514 may add or remove any edge nodes from the edge node network.

The threat alert routing engine 516 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for routing or communicating any received threat alert to appropriate user devices. Upon receiving a threat alert, the threat alert routing engine 516 may identify a threat level associated with the threat alert. Based on the identified threat level and information stored in the escalation matrix 520, the threat alert routing engine 516 may select a set of user devices of the plurality of user devices 106 for receiving the threat alert. The threat alert routing engine 516 may communicate the threat alert to the selected set of user devices of the plurality of user devices 106.

The simulation engine 518 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for generating a virtual environment based on a threat alert received by the remote node 104. The received threat alert may be indicative of an entity identifier of a detected object, a current position of the detected object, a current velocity of the detected object, and a forecasted trajectory (i.e., a series of forecasted positions and a series of forecasted velocities that correspond to a set of future time-instances) of the detected object. Based on the received threat alert and the terrain signature of the geographical region 204, the simulation engine 518 may generate a virtual environment. The generated virtual environment may be one of a two-dimensional environment, a three-dimensional environment, a virtual reality (VR) environment, or an augmented reality (AR) environment. The generated virtual environment may include a digital version of the geographical region 204. The digital version of the geographical region 204 may be generated in the virtual environment using the terrain signature of the geographical region 204. The generated virtual environment may further include a digital version of the detected object associated with the received threat alert (i.e., associated with the entity identifier included in the received threat alert). The generated virtual environment may include the digital version of the detected object and the forecasted trajectory of the detected object overlaid on the digital version of the geographical region 204 based on the current position of the detected object. The generated virtual environment may indicate, in real-time or near real-time, a movement of the detected object through the digital version of the geographical region 204 based on the forecasted trajectory of the detected object.
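
The simulation engine 518 overlays the detected object and its forecasted trajectory onto the digital version of the geographical region 204. The following sketch is only a schematic illustration of that overlay step under assumed conventions: it maps each forecasted (x, y) position onto a cell of the grid-partitioned virtual map described earlier, with the coordinate frame, tuple layout, and 30 m cell size chosen purely for the example.

```python
def overlay_trajectory(trajectory, cell_size_m=30.0):
    """Map forecasted (timestamp, x_m, y_m) positions onto grid cells of the virtual map.

    The local coordinate frame and cell size are assumptions for this sketch.
    """
    cells = []
    for timestamp, x_m, y_m in trajectory:
        cell = (int(y_m // cell_size_m), int(x_m // cell_size_m))
        cells.append((timestamp, cell))
    return cells

# A toy forecasted trajectory: the object crosses three adjacent cells over two seconds.
forecast = [(0.0, 45.0, 10.0), (1.0, 75.0, 12.0), (2.0, 110.0, 15.0)]
print(overlay_trajectory(forecast))
# -> [(0.0, (0, 1)), (1.0, (0, 2)), (2.0, (0, 3))]
```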

The display device 508 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, for rendering or displaying the virtual environment generated by the simulation engine 518. The generated virtual environment may be displayed (i.e., rendered) on a display screen of the display device 508. Examples of the display device 508 include, but are not limited to, monitors, VR headsets, televisions, mobile phones, laptops, personal computers, tablets, phablets, or the like.

The second network interface 510 may include suitable logic, circuitry, interfaces, and/or code, executable by the circuitry, that may be configured to enable the remote node 104 to communicate with the plurality of edge nodes 102 and/or the plurality of user devices 106. The second network interface 510 may be implemented as hardware, software, firmware, or a combination thereof. Examples of the second network interface 510 may include a network interface card, a physical port, a network interface device, an antenna, a radio frequency transceiver, a wireless transceiver, an Ethernet port, a universal serial bus (USB) port, or the like.

For the sake of brevity, in the current embodiment it is assumed that the second memory 504 stores the escalation matrix 520. In another embodiment, the plurality of edge nodes 102 may also store the escalation matrix 520. For example, the first memory 322 may store the escalation matrix 520. In such a scenario, upon generation of a threat alert, the first processing circuitry 318 may select a set of user devices of the plurality of user devices 106 for receiving the generated threat alert based on an authorization level of each of the selected set of user devices. The first processing circuitry 318 may directly communicate the generated threat alert to the selected set of user devices.

FIG. 6 is a block diagram that illustrates the second machine learning engine 506, in accordance with an exemplary embodiment of the present disclosure. The second machine learning engine 506 is shown to store, therein, first through ninth machine learning models 602a-602i. Examples of the first through ninth machine learning models 602a-602i may include, but are not limited to, deep neural networks, convolutional neural networks, long short-term memory networks, or the like. Examples of the first through ninth machine learning models 602a-602i may further include an ensemble model that is a combination of aforementioned machine learning models.

In a non-limiting example, the first through sixth machine learning models 602a-602f may be used to predict a likelihood of an object being present in the geographical region 204 (i.e., in the first section) based on the first through sixth sets of sensor outputs, respectively. For example, the first machine learning model 602a may be trained to determine (i.e., predict) a likelihood or probability of a moving object being present in the first section based on the first set of sensor outputs generated by the set of radar sensors 302.

For training the first through sixth machine learning models 602a-602f, the second machine learning engine 506 may use feature selection and/or feature engineering techniques to analyze a first training dataset pertaining to detection of objects in the geographical region 204. The first training dataset may include data indicating sensor outputs generated by the set of radar sensors 302 at various historical time-instances. The first training dataset may further indicate whether an object (e.g., a moving object) was detected at each of the various historical time-instances. In other words, the first training dataset may be indicative of a relationship or correlation between sensor outputs generated by the set of radar sensors 302 and detection of objects (i.e., moving objects). Based on the analysis of the first training dataset, the second machine learning engine 506 may determine a first plurality of features that affect determination of a likelihood (i.e., probability) of a presence of an object in the geographical region 204. Each of the first plurality of features may have a high degree of correlation or a causal relationship with the probability of a presence of an object in the geographical region 204. The feature or variable selection techniques may include various statistical techniques such as, but not limited to, Theil’s U, Spearman’s correlation, Pearson’s correlation, variance inflation factor, analysis of variance (ANOVA), or logarithmic scaling. Each feature of the first plurality of features may be associated with a corresponding weight that is indicative of an extent to which the feature is correlated with the likelihood of an object being present in the geographical region 204. In a non-limiting example, the first plurality of features may include a time difference between the transmission of a radio wave and the reception of the reflected wave. The first plurality of features may further include a difference in frequencies of the transmitted and received waves, an orientation of each radar sensor of the set of radar sensors 302, a type of foliage in the geographical region 204, a distribution or extent of foliage in the geographical region 204, weather conditions, or the like. The second machine learning engine 506 may determine a first plurality of feature values for the first plurality of features based on the first training dataset. The second machine learning engine 506 may then train the first machine learning model 602a to determine or predict a likelihood of an object being present in the geographical region 204 based on the first plurality of feature values.
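
The paragraph above describes feature selection (e.g., Pearson's correlation) followed by training of the first machine learning model 602a to output a detection likelihood. The Python sketch below is a stand-in, not the disclosed training procedure: it screens synthetic radar-derived features with Pearson's correlation and then fits a random forest classifier, where the feature names, the synthetic data, the correlation cutoff, and the choice of classifier are all assumptions.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the first training dataset: each row holds feature values
# derived from radar outputs, and the label says whether a moving object was present.
rng = np.random.default_rng(0)
feature_names = ["echo_delay", "doppler_shift", "sensor_tilt", "foliage_density"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Feature screening: keep features whose correlation with the label is non-negligible.
selected = []
for name, column in zip(feature_names, X.T):
    r, _ = pearsonr(column, y)
    if abs(r) > 0.1:  # cutoff is an assumption for this sketch
        selected.append(name)
print("selected features:", selected)

# Train a probabilistic detector on the selected columns only.
columns = [feature_names.index(name) for name in selected]
detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:, columns], y)
print("P(object present):", detector.predict_proba(X[:1, columns])[0, 1])
```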

Similarly, the second machine learning model 602b may be trained to determine a likelihood or probability of an object being present in the geographical region 204 based on the second set of sensor outputs generated by the set of IR sensors 304. The first training dataset may further include data indicating sensor outputs generated by the set of IR sensors 304 at various historical time-instances. The first training dataset may further indicate whether an object (e.g., a moving object) was detected at each of the various historical time-instances. In other words, the first training dataset may be indicative of a relationship or correlation between sensor outputs generated by the set of IR sensors 304 and detection of objects (i.e., moving objects). For training the second machine learning model 602b, the second machine learning engine 506 may analyze the first training dataset to determine (i.e., predict) a second plurality of features that affect determination of a likelihood (i.e., probability) of a presence of an object in the geographical region 204. The second machine learning engine 506 may determine, based on information included in the first training dataset, a second plurality of feature values for the second plurality of features. In a non-limiting example, the second plurality of features may include a heat map indicated by sensor outputs generated by the set of IR sensors 304, an orientation of each IR sensor of the set of IR sensors 304, a type of foliage in the geographical region 204, a distribution or extent of foliage in the geographical region 204, weather conditions, or the like. The second machine learning engine 506 may train the second machine learning model 602b to determine or predict a likelihood of an object being present in the geographical region 204 based on the second plurality of feature values.

Similarly, the third machine learning model 602c may be trained to determine a likelihood or probability of an object being present in the geographical region 204 based on the third set of sensor outputs generated by the set of visible spectrum sensors 306. The first training dataset may further include data indicating sensor outputs generated by the set of visible spectrum sensors 306 at various historical time-instances. The first training dataset may further indicate whether an object (e.g., a moving object) was detected at each of the various historical time-instances. In other words, the first training dataset may be indicative of a relationship or correlation between sensor outputs generated by the set of visible spectrum sensors 306 and detection of objects (i.e., moving objects). For training the third machine learning model 602c, the second machine learning engine 506 may analyze the first training dataset to determine (i.e., predict) a third plurality of features that affect determination of a likelihood (i.e., probability) of a presence of an object in the geographical region 204. The second machine learning engine 506 may determine, based on the information included in the first training dataset, a third plurality of feature values for the third plurality of features. The second machine learning engine 506 may train the third machine learning model 602c to determine or predict a likelihood of an object being present in the geographical region 204, based on the third plurality of feature values.

Similarly, the fourth machine learning model 602d may be trained to determine a likelihood or probability of an object being present in the geographical region 204 based on the fourth set of sensor outputs generated by the set of laser rangefinder sensors 308. The first training dataset may further include data indicating sensor outputs generated by the set of laser rangefinder sensors 308 at various historical time-instances. The first training dataset may further indicate whether an object (e.g., a moving object) was detected at each of the various historical time-instances. In other words, the first training dataset may be indicative of a relationship or correlation between sensor outputs generated by the set of laser rangefinder sensors 308 and detection of objects (i.e., moving objects).

For training the fourth machine learning model 602d, the second machine learning engine 506 may analyze the first training dataset to determine (i.e., predict) a fourth plurality of features that affect determination of a likelihood (i.e., probability) of a presence of an object in the geographical region 204. In a non-limiting example, the fourth plurality of features may include a time difference between the transmission of a laser beam and the reception of the laser beam post reflection, an orientation of each laser rangefinder sensor of the set of laser rangefinder sensors 308, a type of foliage in the geographical region 204, a distribution or extent of foliage in the geographical region 204, weather conditions, or the like. The second machine learning engine 506 may determine, based on the information included in the first training dataset, a fourth plurality of feature values for the fourth plurality of features. The second machine learning engine 506 may train the fourth machine learning model 602d to determine or predict a likelihood of an object being present in the geographical region 204 based on the fourth plurality of feature values.

Similarly, the fifth machine learning model 602e may be trained to determine a likelihood or probability of an object being present in the geographical region 204, based on the fifth set of sensor outputs generated by the set of LiDAR sensors 310. The first training dataset may further include data indicating sensor outputs generated by the set of LiDAR sensors 310 at various historical time-instances. The first training dataset may further indicate whether an object (e.g., a moving object) was detected at each of the various historical time-instances. In other words, the first training dataset may be indicative of a relationship or correlation between sensor outputs generated by the set of LiDAR sensors 310 and detection of objects (i.e., moving objects).

For training the fifth machine learning model 602e, the second machine learning engine 506 may analyze the first training dataset to determine (i.e., predict) a fifth plurality of features that affect determination of a likelihood (i.e., probability) of a presence of an object in the geographical region 204. In a non-limiting example, the fifth plurality of features may include a time difference between the transmission of laser light and the reception of the laser light post reflection, an orientation of each LiDAR sensor of the set of LiDAR sensors 310, a type of foliage in the geographical region 204, a distribution or extent of foliage in the geographical region 204, weather conditions, or the like. The second machine learning engine 506 may determine, based on the information included in the first training dataset, a fifth plurality of feature values for the fifth plurality of features. The second machine learning engine 506 may train the fifth machine learning model 602e to determine or predict a likelihood of an object being present in the geographical region 204 based on the fifth plurality of feature values.

Similarly, the sixth machine learning model 602f may be trained to determine a likelihood or probability of an object being present in the geographical region 204 based on the sixth set of sensor outputs generated by the set of acoustic sensors 312. The first training dataset may further include data indicating sensor outputs generated by the set of acoustic sensors 312 at various historical time-instances. The first training dataset may further indicate whether an object (e.g., a moving object) was detected at each of the various historical time-instances. In other words, the first training dataset may be indicative of a relationship or correlation between sensor outputs generated by the set of acoustic sensors 312 and detection of objects (i.e., moving objects).

For training the sixth machine learning model 602f, the second machine learning engine 506 may analyze the first training dataset to determine (i.e., predict) a sixth plurality of features that affect determination of a likelihood (i.e., probability) of a presence of an object in the geographical region 204. In a non-limiting example, the sixth plurality of features may include a time difference between the transmission of an acoustic wave and the reception of the acoustic wave post reflection, an orientation of each acoustic sensor of the set of acoustic sensors 312, or the like. The second machine learning engine 506 may determine, based on the information included in the first training dataset, a sixth plurality of feature values for the sixth plurality of features. The second machine learning engine 506 may train the sixth machine learning model 602f to determine or predict a likelihood of an object being present in the geographical region 204 based on the sixth plurality of feature values.

For the sake of brevity, it is assumed that the first through sixth machine learning models 602a-602f are trained based on sensor outputs of sensors (e.g., the first through sixth sets of sensors 302-312) included in the first edge node 102a. However, it will be apparent to those of skill in the art that the first through sixth machine learning models 602a-602f may be trained based on sensor outputs of all sensors included in the plurality of edge nodes 102.

The second machine learning engine 506 may further train the seventh machine learning model 602g for recognition and identification of detected objects. The seventh machine learning model 602g may be trained to recognize and identify objects detected within the geographical region 204. For example, the seventh machine learning model 602g may be trained to recognize a detected object as one of a human, an animal, a vehicle, or the like. In some embodiments, the seventh machine learning model 602g may be further trained to determine various attributes of a recognized object. For example, the seventh machine learning model 602g may be trained to determine an age group (e.g., child, teenager, or adult), an ethnicity (e.g., Asian or Caucasian), or a gender (e.g., male or female) of an object that has been recognized as a human. Similarly, the seventh machine learning model 602g may be trained to determine a type (e.g., a two-wheel or a four-wheel vehicle) of an object that has been recognized as a vehicle. Similarly, the seventh machine learning model 602g may be trained to determine a type (e.g., a dog, a cow, or a horse) of an object that has been recognized as an animal. It will be apparent to those of skill in the art that the determination of the various attributes of a recognized object is not restricted to the aforementioned attributes. Any attribute of a recognized object may be determined without deviating from the scope of the disclosure. For example, based on the recognition of a detected object as a human, the seventh machine learning model 602g may be trained to determine a type of attire worn by the human, enabling the seventh machine learning model 602g or the second machine learning engine 506 to determine whether the human is an employee associated with the secure area 202.

In a non-limiting example, the seventh machine learning model 602g may be trained using a second training dataset. The second training dataset may include tagged images or videos of various objects (e.g., humans, vehicles, animals, or the like). For training the seventh machine learning model 602g, the second machine learning engine 506 may analyze the second training dataset to determine a seventh plurality of features (i.e., image descriptors or video descriptors) that affect recognition and identification of detected objects. In a non-limiting example, the seventh plurality of features (i.e., the image descriptors or video descriptors) may correspond to a shape of an object in an image, a color of an object in an image, a texture of an object in an image, a motion of an object in a video, a heat signature of an object as indicated by sensor outputs from the set of IR sensors 304, or the like. Examples of the image descriptors include, but are not limited to, dominant color descriptors, scalable color descriptors, color structure descriptors, texture browsing descriptors, edge histogram descriptors, or the like. Based on the second training dataset, the second machine learning engine 506 may determine a seventh plurality of feature values for the seventh plurality of features. The second machine learning engine 506 may train the seventh machine learning model 602g to recognize detected objects, based on the seventh plurality of feature values.

The second machine learning engine 506 may further train the eighth machine learning model 602h for determining intent of detected objects. The eighth machine learning model 602h may be trained to determine an intent of any object (i.e., moving object) detected within the geographical region 204. In a non-limiting example, the eighth machine learning model 602h may be trained using a third training dataset. The third training dataset may include a first set of parameters corresponding to a set of objects. In a non-limiting example, the third training dataset may include a type (e.g., human, vehicle, animal, or the like) of an object, a current position of an object, a current velocity of an object, a forecasted trajectory of an object, or the like. The third training dataset may include data that correlates the first set of parameters with a type of intent of an object. In other words, the third training dataset may tag objects in various scenarios with a type of intent. For example, a vehicle (i.e., type of object or first parameter) travelling at a high velocity (i.e., current velocity or second parameter) towards the secure area 202 may be tagged with the first type of intent (i.e., suspicious). Similarly, an adult male human (i.e., type of object or first parameter) standing (i.e., current position or third parameter) far away from the secure area 202 and travelling away from the secure area 202 may be tagged with the second type of intent (i.e., non-suspicious). Similarly, a human child walking towards the secure area 202 at a low velocity may be tagged with the second type of intent. For training the eighth machine learning model 602h, the second machine learning engine 506 may analyze the third training dataset to determine an eighth plurality of features that affect an intent of an object. The eighth plurality of features may pertain to the first set of parameters (e.g., the first through third parameters).
In a non-limiting example, the eighth plurality of features may correspond to a type of an object, a current velocity of an object, a time at which an object is detected in the geographical region 204, a forecasted future velocity of an object, a forecasted future position of an object, a current position of an object, or the like. The eighth machine learning model 602h may learn a correlation between a type of intent and the first set of parameters (i.e., the eighth plurality of features). In other words, the eighth machine learning model 602h may learn to correlate values of the first set of parameters with an intent of an object. Based on the third training dataset, the second machine learning engine 506 may determine an eighth plurality of feature values for the eighth plurality of features. The second machine learning engine 506 may train the eighth machine learning model 602h to determine an intent of an object as one of the first type of intent (i.e., suspicious) or the second type of intent (i.e., non-suspicious) based on the eighth plurality of feature values.
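
As one hedged, self-contained illustration of the intent-determination step described above (not the disclosed model or its third training dataset), the sketch below encodes a handful of tagged scenarios similar to those in the examples, labels the first type of intent (suspicious) as 1, and fits a logistic regression classifier. The feature encoding, the tiny training set, and the classifier choice are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for the third training dataset. Each row: [object type code,
# speed (m/s), closing speed toward the secure area (m/s, + = approaching),
# distance to the secure area (m)]; label 1 = suspicious, 0 = non-suspicious.
OBJECT_TYPES = {"human_adult": 0, "human_child": 1, "vehicle": 2, "animal": 3}
X = np.array([
    [OBJECT_TYPES["vehicle"],     18.0,  18.0,  400.0],  # fast vehicle approaching
    [OBJECT_TYPES["human_adult"],  1.0,  -1.0, 2500.0],  # adult walking away, far off
    [OBJECT_TYPES["human_child"],  1.5,   1.5,  900.0],  # child walking slowly toward
    [OBJECT_TYPES["human_adult"], 10.0,  10.0,  300.0],  # adult running toward
    [OBJECT_TYPES["animal"],       2.0,   0.5, 1200.0],  # grazing animal
    [OBJECT_TYPES["vehicle"],      6.0,  -6.0, 1800.0],  # two-wheeler moving away
])
y = np.array([1, 0, 0, 1, 0, 0])  # 1 = first type of intent (suspicious)

intent_model = LogisticRegression(max_iter=1000).fit(X, y)

# Query the toy model for a new scenario: an adult running toward the secure area.
candidate = np.array([[OBJECT_TYPES["human_adult"], 10.0, 10.0, 250.0]])
print("P(suspicious):", intent_model.predict_proba(candidate)[0, 1])
```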

In an example, the trained eighth machine learning model 602h may determine that an object (e.g., the moving object) has the first type of intent (i.e., suspicious), if the object is recognized or identified as a human male that is moving at a high velocity (e.g., 10 meters/second) towards the secure area 202 (i.e., forecasted position).

In another example, the trained eighth machine learning model 602h may determine that an object (e.g., the moving object) has the second type of intent (i.e., non-suspicious), if the object is recognized or identified as a human male that is moving at a high velocity (e.g., 10 meters/second) away from the secure area 202 (i.e., forecasted position).

In another example, the trained eighth machine learning model 602h may determine that an object (e.g., the moving object) has the first type of intent (i.e., suspicious), if the object is recognized or identified as a four-wheel vehicle that is moving at a high velocity (e.g., 18 meters/second) towards the secure area 202 (i.e., forecasted position). In another example, the trained eighth machine learning model 602h may determine that an object (e.g., the moving object) has the second type of intent (i.e., non-suspicious), if the object is recognized or identified as a two- wheel vehicle that is moving at a moderate velocity (e.g., 6 meters/second) away from the secure area 202 (i.e., forecasted position).

In another example, the trained eighth machine learning model 602h may determine that an object (e.g., the moving object) has the second type of intent (i.e., non-suspicious), if the object is recognized or identified as a human child.

In another example, the trained eighth machine learning model 602h may determine that an object (e.g., the moving object) has the second type of intent (i.e., non-suspicious), if the object is recognized or identified as a local animal (e.g., a cow or a horse) moving at a low velocity (e.g., 2 meters/second).

In another example, the trained eighth machine learning model 602h may determine that an object (e.g., the moving object) has the second type of intent (i.e., non-suspicious), if the object is detected as an employee associated with the secure area 202.

It will be apparent to a person of ordinary skill in the art that the abovementioned examples are described for exemplary purposes and should not be construed as limiting the scope of the disclosure.

The second machine learning engine 506 may further train the ninth machine learning model 602i for threat assessment of detected objects. The ninth machine learning model 602i may be trained to determine a threat level for any object (i.e., moving object) detected within the geographical region 204. In a non-limiting example, the ninth machine learning model 602i may be trained using a fourth training dataset. The fourth training dataset may include a second set of parameters corresponding to a set of objects. In a non-limiting example, the fourth training dataset may include a type (e.g., human, vehicle, animal, or the like) of an object, a determined intent of an object, a current position of an object, a current velocity of an object, a forecasted trajectory of an object, or the like. In other words, the fourth training dataset may tag objects in various scenarios with a threat level. For example, a vehicle (i.e., type of object or first parameter) with the first type of intent (i.e., fourth parameter) and travelling at a high velocity (i.e., current velocity or second parameter) towards the secure area 202 may be tagged with a high threat level (e.g., “5”). Similarly, an adult male human (i.e., type of object or first parameter) with the second type of intent (i.e., fourth parameter) standing (i.e., current position or third parameter) far away from the secure area 202 and travelling away from the secure area 202 may be tagged with a low threat level (e.g., “0”). The fourth training dataset may include data that correlates the second set of parameters with a threat level that is to be determined for the object (i.e., assigned to the object). For training the ninth machine learning model 602i, the second machine learning engine 506 may analyze the fourth training dataset to determine a ninth plurality of features that affect a threat level of an object.

The ninth plurality of features may pertain to the second set of parameters (e.g., the first through fourth parameters). In a non-limiting example, the ninth plurality of features may correspond to a type of an object, a determined intent of an object, a current velocity of an object, a time at which an object is detected in the geographical region 204, a forecasted future velocity of an object, a forecasted future position of an object, a current position of an object, or the like. The ninth machine learning model 602i may learn a correlation between threat levels and the second set of parameters (i.e., the ninth plurality of features). In other words, the ninth machine learning model 602i may learn to correlate values of the second set of parameters with a threat level of an object. Based on the fourth training dataset, the second machine learning engine 506 may determine a ninth plurality of feature values for the ninth plurality of features. The second machine learning engine 506 may train the ninth machine learning model 602i to determine a threat level for a detected object based on the ninth plurality of feature values. The determined threat level for the detected object may correspond to one of a plurality of threat levels. As described in the foregoing description of FIG. 1, the plurality of threat levels may include seven threat levels that range from “0” to “6”. A threat level of “0” assigned to a detected object may indicate that the detected object is not deemed a threat. A threat level of “6” assigned to a detected object may indicate that the detected object is an urgent threat and that the detected object needs to be dealt with immediately.
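
As a hedged, illustrative counterpart to the threat-assessment training described above (not the disclosed ninth machine learning model 602i or its fourth training dataset), the sketch below tags a few toy scenarios with threat levels on the “0” to “6” scale, fits a small regression tree, and clamps its output onto the discrete scale. The feature encoding, the toy labels, and the model choice are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Toy stand-in for the fourth training dataset. Each row: [intent (1 = suspicious),
# object type code (0 = human, 2 = vehicle, 3 = animal), speed (m/s),
# closing speed toward the secure area (m/s, + = approaching)].
X = np.array([
    [0, 0,  1.0, -1.0],   # non-suspicious adult walking away            -> 0
    [0, 3,  2.0,  0.5],   # grazing animal                               -> 0
    [0, 2,  6.0, -6.0],   # two-wheeler leaving the area                 -> 1
    [1, 0, 10.0, 10.0],   # suspicious adult running toward the area     -> 3
    [1, 2, 18.0, 18.0],   # suspicious vehicle speeding toward the area  -> 5
    [1, 2, 25.0, 25.0],   # very fast suspicious vehicle                 -> 6
])
y = np.array([0, 0, 1, 3, 5, 6])

threat_model = DecisionTreeRegressor(random_state=0).fit(X, y)

def assign_threat_level(features):
    """Clamp the model output onto the discrete "0" to "6" threat-level scale."""
    raw = float(threat_model.predict(np.asarray(features).reshape(1, -1))[0])
    return int(np.clip(round(raw), 0, 6))

print(assign_threat_level([1, 2, 20.0, 20.0]))  # expected to be a high threat level
```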

In an example, the ninth machine learning model 602i may determine a low threat level (e.g., “0”) for an object (e.g., the moving object), if the object is recognized or identified as a human male with the second type of intent and the object is moving at a high velocity (e.g., 10 meters/second) away from the secure area 202 (i.e., forecasted position).

In another example, the trained ninth machine learning model 602i may determine a moderate threat level (e.g., “3”) for an object (e.g., the moving object), if the object is recognized or identified as a human female with the first type of intent and the object is moving at a high velocity (e.g., 10 meters/second) towards the secure area 202 (i.e., forecasted position).

In another example, the trained ninth machine learning model 602i may determine a high threat level (e.g., “5”) for an object (e.g., the moving object), if the object is recognized or identified as a four-wheel vehicle with the first type of intent and the object is moving at a high velocity (e.g., 18 meters/second) towards the secure area 202 (i.e., forecasted position).

It will be apparent to a person of ordinary skill in the art that the abovementioned examples are described for exemplary purposes and should not be construed as limiting the scope of the disclosure.

Following the training of the first through ninth machine learning models 602a-602i, the second processing circuitry 502 may communicate, to the first processing circuitry 318 or the first machine learning engine 320, a set of weights (e.g., neuron weights) corresponding to the trained first through ninth machine learning models 602a-602i. The first machine learning engine 320 may execute or implement a local version of each of the trained first through ninth machine learning models 602a-602i, based on the set of weights. For the sake of brevity, the local versions of the trained first through ninth machine learning models 602a-602i are simply referred to as “the trained first through ninth machine learning models 602a-602i” or “the first through ninth machine learning models 602a-602i”.
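
One simple way the weight hand-off described above could work is to serialize the trained parameters at the remote node 104 and rebuild local copies at the edge node. The sketch below uses JSON-encoded layer matrices purely as an illustration, since the disclosure does not specify a serialization format, model architecture, or message schema.

```python
import json
import numpy as np

def export_weights(model_id, layers):
    """Remote node side: serialize trained layer weights into a transferable payload."""
    return json.dumps({"model_id": model_id,
                       "weights": [layer.tolist() for layer in layers]})

def import_weights(payload):
    """Edge node side: rebuild the local copy of the model weights from the payload."""
    message = json.loads(payload)
    return message["model_id"], [np.asarray(layer) for layer in message["weights"]]

# Toy two-layer weight set standing in for the parameters of one trained model.
rng = np.random.default_rng(0)
trained_layers = [rng.normal(size=(4, 8)), rng.normal(size=(8, 1))]

payload = export_weights("602h", trained_layers)           # sent over the edge node network
model_id, local_layers = import_weights(payload)            # received at the first edge node
print(model_id, [layer.shape for layer in local_layers])    # -> 602h [(4, 8), (8, 1)]
```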

FIGS. 7A-7F, collectively, represent a process flow diagram 700 that illustrates detection and threat assessment of an object in the geographical region 204, in accordance with an exemplary embodiment of the present disclosure. The process flow diagram 700 involves the first and second pluralities of sensors, the GNSS device 316, the first processing circuitry 318, the first machine learning engine 320, the first memory 322, and the first network interface 324. FIGS. 7A-7F are described in conjunction with FIGS. 1-6. For the sake of brevity, it is assumed that the first through ninth machine learning models 602a-602i are already trained by the second machine learning engine 506 and that the first machine learning engine 320 executes or implements the trained first through ninth machine learning models 602a-602i.

Referring to FIG. 7A, the first through fifth sets of sensors 302-310 scan an environment, i.e., the portion of the geographical region 204 surrounding the first edge node 102a (as shown by arrows 702a-702e). For example, the set of radar sensors 302 may transmit radio waves and receive radio waves that may be reflected back to the set of radar sensors 302. Similarly, the set of IR sensors 304 may detect IR radiation emitted by objects in the geographical region 204. Hereinafter, the environment surrounding the first edge node 102a is designated and referred to as “surrounding environment”. The term “surrounding environment” and the term “first section” are used interchangeably throughout the disclosure. Similarly, the set of visible spectrum sensors 306 may scan the surrounding environment (i.e., monitor the geographical region 204). It will be apparent to those of skill in the art that the set of laser rangefinder sensors 308 and the set of LiDAR sensors 310 scan the surrounding environment in a similar manner.

The first through fifth sets of sensors 302-310 generate the first through fifth sets of sensor outputs, respectively, based on the monitoring of the geographical region 204 (as shown by arrows 704a-704e). In a non-limiting example, sensor outputs generated by a sensor (e.g., the set of radar sensors 302) may include raw data generated by a corresponding sensor. For example, the first set of sensor outputs, generated by the set of radar sensors 302, may include a time-series that is indicative of a time associated with a transmission of each radio wave and a time associated with a reception of each reflected radio wave. The first set of sensor outputs may further include a frequency and a wavelength of each transmitted radio wave and a frequency and a wavelength of each received radio wave. Similarly, the second set of sensor outputs, generated by the set of IR sensors 304, may be indicative of heat energy or IR radiation detected by the set of IR sensors 304. In a non-limiting example, the second set of sensor outputs may include or be indicative of a heat map of the surrounding environment. The third set of sensor outputs, generated by the set of visible spectrum sensors 306, may include images or videos of the surrounding environment. The fourth set of sensor outputs, generated by the set of laser rangefinder sensors 308, may include a time-series that is indicative of a time associated with a transmission of each laser light beam and a time associated with a reception of each reflected laser light beam. The fifth set of sensor outputs, generated by the set of LiDAR sensors 310, may include a time-series that is indicative of a time associated with a transmission of each light beam and a time associated with a reception of each reflected light beam. The fifth set of sensor outputs may further include a frequency and a wavelength of each light wave transmitted and received by the set of LiDAR sensors 310.

The set of radar sensors 302, the set of IR sensors 304, the set of visible spectrum sensors 306, the set of laser rangefinder sensors 308, and the set of LiDAR sensors 310 may communicate the generated first through fifth sets of sensor outputs, respectively, to the first processing circuitry 318 (as shown by arrows 706a-706e). In other words, the sensor output processing engine 402 receives the first through fifth sets of sensor outputs.

Referring to FIG. 7B, the set of acoustic sensors 312 may scan the surrounding environment (as shown by arrow 708). Similar to the set of radar sensors 302, the set of acoustic sensors 312 may generate a sixth set of sensor outputs (as shown by arrow 710). The sixth set of sensor outputs may include a time-series that is indicative of a time associated with each transmitted acoustic signal (i.e., wave) and a time associated with each received acoustic signal (i.e., wave). The sixth set of sensor outputs may further include a frequency and a wavelength of each transmitted and received acoustic signal.

The set of IMU sensors 314 may generate the seventh set of sensor outputs (as shown by arrow 712). The seventh set of sensor outputs, generated by the set of IMU sensors 314, may be indicative of an orientation of each sensor of the first through sixth sets of sensors 302-312. As described in the foregoing description of FIG. 3, an orientation of a sensor may include a height at which the sensor is located with respect to ground level, a tilt angle of the sensor with respect to a reference plane, or the like. The set of acoustic sensors 312 and the set of IMU sensors 314 may communicate the sixth and seventh sets of sensor outputs to the first processing circuitry 318 (as shown by arrows 714a and 714b). In other words, the sensor output processing engine 402 receives the sixth and seventh sets of sensor outputs.

The GNSS device 316 may communicate first location information of the first edge node 102a to the first processing circuitry 318 (as shown by arrow 716). The first location information may be indicative of the geographical location of the first edge node 102a. The geographical location of the first edge node 102a may include, but is not limited to, a latitude, a longitude, and an altitude of the first edge node 102a. The first location information may further be indicative of a direction of magnetic north and a direction of true (i.e., non-magnetic) north. The first memory 322 may communicate the terrain signature of the geographical region 204 (as shown by arrow 718). The terrain signature of the geographical region 204 is stored in the terrain database 328. In some embodiments, the communicated terrain signature may include only a terrain signature of the first section of the geographical region 204, and not the terrain signature of the entire geographical region 204.

In a non-limiting example, it is assumed that among the first through sixth sets of sensors 302-312, the set of radar sensors 302 has the highest detection range. In a non-limiting example, the detection range of a sensor may be defined as a maximum range within which the sensor may reliably detect objects. For example, the set of radar sensors 302 may have a detection range of “8” km. In other words, the set of radar sensors 302 may be able to receive radio waves reflected by objects that are located up to “8” km away from a location of the set of radar sensors 302 (i.e., the location of the first edge node 102a). Similarly, the set of IR sensors 304, the set of visible spectrum sensors 306, the set of laser rangefinder sensors 308, the set of LiDAR sensors 310, and the set of acoustic sensors 312 may have detection ranges of “4” km, “3” km, “3” km, “2” km, and “1” km, respectively. Therefore, it is likely that any moving object entering the surrounding environment of the first edge node 102a may be detected by the set of radar sensors 302 before any of the second through sixth sets of sensors 304-312. It will be apparent to those of skill in the art that the detection ranges of the various sensors mentioned above are merely exemplary and are not meant to limit the scope of the disclosure. In an actual implementation, the detection ranges of the first through sixth sets of sensors 302-312 may vary.

The first processing circuitry 318 may communicate a first request to the first machine learning engine 320 (as shown by arrow 720). The first request may be a request for determining a likelihood of a presence of a moving object in the surrounding environment (i.e., the geographical region 204), based on the first set of sensor outputs of the set of radar sensors 302. The first request may include the first set of sensor outputs of the set of radar sensors 302. The first request may further include the terrain signature of the surrounding environment and the first location information of the first edge node 102a (i.e., the first location information of the set of radar sensors 302). The first machine learning engine 320 may determine a plurality of feature values for the first plurality of features associated with the first machine learning model 602a. The first machine learning engine 320 may provide as input, to the first machine learning model 602a, the determined plurality of feature values (as shown by arrow 722). Based on the inputted plurality of feature values, the first machine learning model 602a may provide as output a likelihood or a probability of a presence of a moving object in the surrounding environment (i.e., the first section). In other words, the first machine learning engine 320 determines the likelihood or probability of a presence of a moving object in the surrounding environment of the first edge node 102a, based on the output of the first machine learning model 602a for the inputted plurality of feature values (as shown by arrow 724). In a non-limiting example, it is assumed that the first machine learning engine 320 determines that the likelihood or probability of a presence of a moving object in the surrounding environment is “0.8” (i.e., “80%”).

Therefore, the output of the first machine learning model 602a indicates that there is a high likelihood of a presence of a moving object in the surrounding environment. Referring now to FIG. 7C, the first machine learning engine 320 may communicate a first response to the first processing circuitry 318 (as shown by arrow 726). The first response is indicative of the determined likelihood or probability (i.e., “0.8”) of a presence of a moving object in the surrounding environment.

In some embodiments, the first processing circuitry 318 may communicate multiple requests to the first machine learning engine 320 for determination of a likelihood or probability of a presence of a moving object in the surrounding environment. Each request of the multiple requests may be a request for the determination of a likelihood or probability of a presence of a moving object based on a specific set of sensor outputs generated by a specific set of sensors. For example, the first processing circuitry 318 may further communicate another request (not shown) for determination of a likelihood of a presence of a moving object in the surrounding environment, based on the second set of sensor outputs generated by the set of IR sensors 304. The other request may include the second set of sensor outputs, the terrain signature of the surrounding environment, and the first location information. Based on the other request, the first machine learning engine 320 may determine, using the second machine learning model 602b, a likelihood of a presence of a moving object in the surrounding environment. The process of determining the likelihood of a presence of a moving object using the second machine learning model 602b may be the same as the process of determining the likelihood using the first machine learning model 602a. Following a similar process, the first machine learning engine 320 may determine, using a corresponding machine learning model of the third through sixth machine learning models 602c-602f, a likelihood of a presence of a moving object, based on each of the third through sixth sets of sensor outputs.

However, in another embodiment, for reducing computing complexity, the first processing circuitry 318 may communicate only a single request (i.e., the first request) to the first machine learning engine 320 for the determination of a likelihood of a presence of a moving object, based on sensor outputs (e.g., the first set of sensor outputs) from a single set of sensors (e.g., the set of radar sensors 302). In such a scenario, the first processing circuitry 318 may utilize various probabilistic techniques (e.g., Bayesian inference techniques) to predict the determination outputs of the other machine learning models (e.g., the second through sixth machine learning models 602b-602f). For example, based on the likelihood determined by the first machine learning model 602a, the first processing circuitry 318 may predict the likelihood of a presence of a moving object that the second machine learning model 602b may determine based on the second set of sensor outputs. Similarly, based on the likelihood determined by the first machine learning model 602a, the first processing circuitry 318 may predict the likelihood of a presence of a moving object that the third machine learning model 602c may determine based on the third set of sensor outputs.
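A minimal sketch of this single-request embodiment is shown below. It assumes hypothetical per-sensor detection and false-alarm probabilities, and uses a simple total-probability calculation as a stand-in for whichever probabilistic technique is actually employed; none of these values or names are taken from the disclosure.

    # Minimal sketch of the single-request embodiment, with hypothetical
    # conditional detection probabilities for each sensor type.
    def predict_other_likelihoods(radar_likelihood: float) -> dict:
        """Predict, from the radar-based likelihood alone, the likelihood each of
        the remaining models (602b-602f) would be expected to report."""
        # P(sensor would detect the object | object present), per sensor type.
        detection_given_object = {
            "ir": 0.85, "visible": 0.75, "rangefinder": 0.70,
            "lidar": 0.65, "acoustic": 0.50,
        }
        # P(sensor reports a detection | no object), i.e. false-alarm rate.
        false_alarm = {
            "ir": 0.05, "visible": 0.08, "rangefinder": 0.04,
            "lidar": 0.03, "acoustic": 0.10,
        }
        p_obj = radar_likelihood
        predicted = {}
        for sensor, p_det in detection_given_object.items():
            # Total probability:
            # P(detect) = P(detect|object)P(object) + P(detect|no object)P(no object)
            predicted[sensor] = p_det * p_obj + false_alarm[sensor] * (1.0 - p_obj)
        return predicted

    print(predict_other_likelihoods(0.8))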

In some embodiments, the first processing circuitry 318 may compare the determined likelihood with a second threshold value (i.e., a threshold probability value). In a non-limiting example, it is assumed that the second threshold value is equal to “0.5”. If the determined likelihood or probability of the presence of a moving object exceeds the threshold probability value, the first processing circuitry 318 may determine that it is highly likely that a moving object is present in the surrounding environment (i.e., the moving object may be deemed detected by the set of radar sensors 302). Consequently, the first processing circuitry 318 (i.e., the sensor output processing engine 402) may process the first set of sensor outputs, generated by the set of radar sensors 302, to determine a direction in which the object is likely to be present (as shown by arrow 730). For example, the first processing circuitry 318 may determine, based on the processing of the first set of sensor outputs, that the moving object is likely to be present in the north-west direction of the first edge node 102a. The first processing circuitry 318 may detect a current orientation of each sensor of the first through sixth sets of sensors 302-312 (as shown by arrow 732). The first processing circuitry 318 may detect the current orientation of each sensor of the first through sixth sets of sensors 302-312 based on the seventh set of sensor outputs received from the set of IMU sensors 314.

In a non-limiting example, the first processing circuitry 318 may detect that the set of visible spectrum sensors 306 is currently oriented towards a direction other than the north-west direction of the first edge node 102a. Similarly, the first processing circuitry 318 may further detect that each of the first and second sets of sensors 302 and 304 and the fourth through sixth sets of sensors 308-312 is currently oriented towards the north-west direction of the first edge node 102a. Based on the detection of the current orientation of each of the first through sixth sets of sensors 302-312, the first processing circuitry 318 may communicate an orientation adjustment instruction to the set of visible spectrum sensors 306 (as shown by arrow 734). The orientation adjustment instruction may be indicative of a new orientation for each sensor of the set of visible spectrum sensors 306. In a non-limiting example, the orientation adjustment instruction may be indicative of a set of pan, tilt, and zoom adjustments to be performed by each sensor, of the set of visible spectrum sensors 306, to orient itself towards the north-west direction of the first edge node 102a where the moving object is likely to be present. The current orientation of each of the set of visible spectrum sensors 306 may be adjusted based on the orientation adjustment instruction (as shown by arrow 736). In other words, the first processing circuitry 318 may adjust the current orientation of each sensor of the set of visible spectrum sensors 306 to focus in the north-west direction where the moving object is likely to be present. In another embodiment, the first processing circuitry 318 may communicate the orientation adjustment instruction to an electro-mechanical mount coupled to the set of visible spectrum sensors 306. A current orientation of the electro-mechanical mount may be adjusted based on the received orientation adjustment instruction, orienting the set of visible spectrum sensors 306 towards the north-west direction of the first edge node 102a. Following the adjustment of the current orientation of the set of visible spectrum sensors 306, a field of vision of the set of visible spectrum sensors 306 may now include the north-west direction of the first edge node 102a where the moving object is likely to be present.
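The following Python sketch illustrates, under assumed angle conventions and message fields, how an orientation adjustment instruction might be computed for a sensor or its electro-mechanical mount; the function names and fields are hypothetical and are not part of the disclosure.

    # Illustrative sketch of building an orientation adjustment instruction; the
    # message fields and angle conventions are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class OrientationAdjustment:
        sensor_id: str
        pan_deg: float   # rotation about the vertical axis
        tilt_deg: float  # rotation about the horizontal axis
        zoom_level: float

    def build_adjustment(sensor_id: str, current_azimuth_deg: float,
                         current_elevation_deg: float, target_azimuth_deg: float,
                         target_elevation_deg: float = 0.0,
                         zoom_level: float = 1.0) -> OrientationAdjustment:
        """Compute the pan/tilt deltas needed to re-orient a sensor (or its
        electro-mechanical mount) towards the direction where the object is
        likely to be present."""
        # Shortest signed rotation from the current azimuth to the target azimuth.
        pan = (target_azimuth_deg - current_azimuth_deg + 540.0) % 360.0 - 180.0
        tilt = target_elevation_deg - current_elevation_deg
        return OrientationAdjustment(sensor_id, pan, tilt, zoom_level)

    # Example: a visible spectrum sensor currently facing east (90 deg) is steered
    # towards the north-west (315 deg) of the first edge node.
    print(build_adjustment("visible_spectrum_01", 90.0, 0.0, 315.0))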

The first processing circuitry 318 may continue to receive the first through seventh sets of sensor outputs from the first through seventh sets of sensors 302-314. In a non-limiting example, it is assumed that the moving object is moving towards the secure area 202 from the north-west direction of the first edge node 102a. For the sake of brevity, hereinafter the moving object is referred to as the “first object” (not shown). As the first object approaches the first edge node 102a, the first object may enter a detection range of various other sensors (e.g., the set of IR sensors 304, the set of laser rangefinder sensors 308, the set of LiDAR sensors 310, or the like). For example, the first object may be detected by the set of IR sensors 304 and the set of laser rangefinder sensors 308. When the first object travels further towards the first edge node 102a, the first object may be detected by the set of LiDAR sensors 310, in addition to the detection by the set of radar sensors 302, the set of IR sensors 304, the set of visible spectrum sensors 306, and the set of laser rangefinder sensors 308. When the first object travels even further towards the first edge node 102a, the first object may be detected by the set of acoustic sensors 312, in addition to the detection by the first through fifth sets of sensors 302-310. For the sake of brevity, it is assumed that the first object is currently detected by the first through sixth sets of sensors 302-312. In other words, the first through sixth sets of sensor outputs generated by the first through sixth sets of sensors 302-312 are indicative of the presence of the first object in the surrounding environment of the first edge node 102a. At a first time-instance “t1”, the first processing circuitry 318 may determine first through sixth positions of the first object, based on the first through sixth sets of sensor outputs, respectively (as shown by arrow 738). In a non-limiting example, the first through sixth sets of sensor outputs may correspond to different co-ordinate systems with various dimensions. Further, owing to different principles of operation and different accuracies of the first through sixth sets of sensors 302-312, there may be small differences in the determined first through sixth positions. For example, the determined first position may correspond to a two-dimensional polar co-ordinate space. The determined second position may correspond to a two-dimensional cartesian space. Similarly, the determined third position may correspond to a two-dimensional cartesian space. The determined fourth position may correspond to a two-dimensional cartesian space. The determined fifth position may correspond to a three-dimensional cartesian space. The determined sixth position may correspond to a one-dimensional or two-dimensional cartesian space. Based on the first through sixth determined positions of the first object, the first processing circuitry 318 may determine a true current position (i.e., a current position) of the first object (as shown by arrow 740). The first processing circuitry 318 may determine the true current position of the first object using various sensor fusion techniques. Examples of sensor fusion techniques may include, but are not limited to, co-operative sensor fusion, complementary sensor fusion, likelihood sensor fusion, or the like.
The first processing circuitry 318 may further perform spatio-temporal synchronization of the first through sixth sets of sensor outputs to ensure that at any time-instance the first through sixth sets of sensor outputs are indicative of a same object (i.e., the first object) at the same position. In a non-limiting example, the determined current position of the first object may correspond to a three-dimensional cartesian space. The current position of the first object at the first time-instance “t1” may be defined in terms of a latitude, a longitude, and an altitude of the first object at the first time-instance “t1”.
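By way of a non-limiting illustration, the following Python sketch shows one simple way positions reported in different co-ordinate systems might be brought into a common cartesian frame and combined. The inverse-variance weighting and the per-sensor accuracies shown are illustrative assumptions and do not represent any specific sensor fusion technique of the disclosure.

    # Minimal sensor-fusion sketch: positions reported in different co-ordinate
    # systems are converted to a common cartesian frame and combined with an
    # inverse-variance weighted average. Accuracies shown are hypothetical.
    import math

    def polar_to_cartesian(range_m: float, bearing_deg: float) -> tuple:
        """Convert a (range, bearing) radar position to (east_m, north_m)."""
        theta = math.radians(bearing_deg)
        return (range_m * math.sin(theta), range_m * math.cos(theta))

    def fuse_positions(positions: list) -> tuple:
        """Each entry is ((east_m, north_m), variance_m2). Returns the fused
        (east_m, north_m) estimate."""
        w_total = sum(1.0 / var for _, var in positions)
        east = sum(p[0] / var for p, var in positions) / w_total
        north = sum(p[1] / var for p, var in positions) / w_total
        return (east, north)

    # Example: radar reports a polar position, the other sensors report cartesian
    # positions; small differences between them are reconciled by the fusion step.
    radar_pos = polar_to_cartesian(5000.0, 315.0)       # roughly (-3536, 3536)
    observations = [
        (radar_pos, 400.0),          # radar: lower positional accuracy
        ((-3530.0, 3540.0), 100.0),  # LiDAR: higher accuracy
        ((-3545.0, 3528.0), 225.0),  # laser rangefinder
    ]
    print(fuse_positions(observations))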

Referring to FIG. 7D, the first processing circuitry 318 may track a movement of the first object, starting from the first time-instance “t1” (as shown by arrow 742). In other words, the first processing circuitry 318 may track current positions of the first object over a period of time (e.g., from the first time-instance “t1” to a second time-instance “t2” to a third time-instance “t3”), based on the first through sixth sets of sensor outputs generated by the first through sixth sets of sensors 302-312 throughout the period of time. Consequently, the first processing circuitry 318 may determine a current velocity of the first object (as shown by arrow 744). The determined current velocity of the first object may be indicative of a latitude, a longitude, an altitude, and a direction of the movement of the first object at a current time-instance (e.g., the third time-instance “t3”). Consequently, the first processing circuitry 318 may forecast a trajectory of the first object in the geographical region 204 based on the tracked movement of the detected object in the geographical region 204 (as shown by arrow 746). For forecasting the trajectory of the first object, the first processing circuitry 318 may use various techniques such as, but not limited to, the central limit theorem, an extended Kalman filter, or the like. The forecasted trajectory may correspond to a series of future time-instances (e.g., fourth through seventh time-instances “t4”-“t7”) and may include a forecasted position and a forecasted velocity at each time-instance of the series of future time-instances. In other words, the forecasted trajectory of the first object includes a series of forecasted positions and a series of forecasted velocities of the first object.
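A simplified trajectory-forecasting sketch is given below. A constant-velocity extrapolation is used here purely as a stand-in for the filter-based techniques named above; the positions, velocities, and time-instances are hypothetical.

    # Sketch of trajectory forecasting; constant-velocity extrapolation is shown
    # as a stand-in for the filter-based techniques named in the description.
    from typing import List, Tuple

    Position = Tuple[float, float, float]   # (latitude, longitude, altitude)
    Velocity = Tuple[float, float, float]   # per-axis rate of change per second

    def forecast_trajectory(current_pos: Position, current_vel: Velocity,
                            horizon_s: List[float]) -> List[Tuple[float, Position, Velocity]]:
        """Return (time offset, forecasted position, forecasted velocity) for each
        future time-instance in the horizon."""
        trajectory = []
        for dt in horizon_s:
            pos = tuple(p + v * dt for p, v in zip(current_pos, current_vel))
            trajectory.append((dt, pos, current_vel))  # velocity assumed constant
        return trajectory

    # Example: forecast at four future time-instances (t4..t7), 10 seconds apart.
    for step in forecast_trajectory((18.5204, 73.8567, 560.0),
                                    (-0.00010, 0.00008, 0.0),
                                    [10.0, 20.0, 30.0, 40.0]):
        print(step)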

The first processing circuitry 318 may communicate a second request to the first machine learning engine 320 (as shown by arrow 748). The second request may be a request for recognition and/or identification of the first object. The second request may include the third set of sensor outputs generated by the set of visible spectrum sensors 306. The third set of sensor outputs may include images and videos of the first object captured by the set of visible spectrum sensors 306. In an embodiment, the second request may further include the second set of sensor outputs generated by the set of IR sensors 304. The second set of sensor outputs may be indicative of a heat signature of the first object. Based on the second and third sets of sensor outputs, the first machine learning engine 320 may determine feature values for the seventh plurality of features associated with the seventh machine learning model 602g (as shown by arrow 750). In other words, the first machine learning engine 320 may determine a plurality of feature values for the seventh plurality of features based on the second and third sets of sensor outputs. The first machine learning engine 320 may provide the determined plurality of feature values as input to the trained seventh machine learning model 602g. Based on the inputted plurality of feature values, the seventh machine learning model 602g may determine a type of the first object. In other words, the seventh machine learning model 602g may recognize and identify the first object (as shown by arrow 752). In a non-limiting example, the seventh machine learning model 602g may recognize the first object as a human and may identify the first object as an adult male human. Consequently, the first machine learning engine 320 may communicate a second response to the first processing circuitry 318 (as shown by arrow 754). The second response may indicate that the first object is recognized and identified as an adult male human (i.e., the type of the first object).
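Purely for illustration, the recognition and identification step could be approximated as in the Python sketch below; the thresholds, feature names, and class labels are hypothetical stand-ins for the trained seventh machine learning model 602g and are not derived from the disclosure.

    # Illustrative stand-in for the recognition/identification step; thresholds
    # and labels are hypothetical.
    from typing import Dict

    def classify_object(heat_signature_c: float, bounding_box_height_m: float,
                        aspect_ratio: float) -> Dict[str, str]:
        """Map fused IR and visible-spectrum features to a coarse recognition and
        a finer identification of the detected object."""
        if 30.0 <= heat_signature_c <= 40.0 and aspect_ratio < 0.8:
            recognition = "human"
            identification = ("adult human" if bounding_box_height_m >= 1.5
                              else "child")
        elif heat_signature_c > 60.0 and aspect_ratio > 1.5:
            recognition, identification = "vehicle", "four-wheel vehicle"
        else:
            recognition, identification = "animal", "unidentified animal"
        return {"recognition": recognition, "identification": identification}

    print(classify_object(heat_signature_c=36.5, bounding_box_height_m=1.78,
                          aspect_ratio=0.45))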

Referring now to FIG. 7E, the first processing circuitry 318 may communicate a third request to the first machine learning engine 320 (as shown by arrow 756). The third request may be a request for determination of an intent of the first object and a threat level for the first object. The third request may include the type of the first object, the current position of the first object, the current velocity of the first object, and the forecasted trajectory of the first object. Based on the third request, the first machine learning engine 320 may determine feature values for the eighth plurality of features associated with the eighth machine learning model 602h (as shown by arrow 758). In other words, the first machine learning engine 320 may determine a plurality of feature values for the eighth plurality of features based on the information included in the third request. The first machine learning engine 320 may provide the determined plurality of feature values as input to the trained eighth machine learning model 602h. Based on the inputted plurality of feature values, the eighth machine learning model 602h may determine (i.e., output) an intent of the first object (as shown by arrow 760). In a non-limiting example, the determined intent of the first object is the first type of intent (i.e., suspicious).

Based on the third request, the first machine learning engine 320 may determine feature values for the ninth plurality of features associated with the ninth machine learning model 602i (as shown by arrow 762). In other words, the first machine learning engine 320 may determine a plurality of feature values for the ninth plurality of features based on the information included in the third request and the determined intent of the first object. The first machine learning engine 320 may provide the determined plurality of feature values (including the determined intent of the first object) as an input to the trained ninth machine learning model 602i for threat assessment of the detected object. Based on the inputted plurality of feature values, the ninth machine learning model 602i may determine (i.e., output) a threat level for the first object (as shown by arrow 764). In other words, the ninth machine learning model 602i may determine (i.e., output) a threat level to be assigned to the first object based on the inputted plurality of feature values. In a non-limiting example, it is assumed that the ninth machine learning model 602i determines a threat level of “3” for the first object. Consequently, the first machine learning engine 320 may communicate a third response to the first processing circuitry 318 (as shown by arrow 766). The third response may indicate the determined intent of the first object and the determined threat level for the first object. Based on the third response, the first processing circuitry 318 may assign the threat level (i.e., “3”), determined by the ninth machine learning model 602i, to the first object (as shown by arrow 768). The threat level of “3” indicates that the first object is deemed to be a low threat to the secure area 202. The first processing circuitry 318 may compare the assigned threat level with the first threshold value (as shown by arrow 770). The first processing circuitry 318 may determine that the assigned threat level (i.e., “3”) exceeds the first threshold value (i.e., “0”).
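A toy sketch of the intent-determination and threat-assessment pipeline is shown below; the feature set, intent labels, and threat scale are illustrative assumptions standing in for the trained eighth and ninth machine learning models 602h and 602i.

    # Illustrative sketch of the intent and threat-assessment pipeline; all rules
    # and thresholds are hypothetical.
    def determine_intent(speed_mps: float, heading_towards_secure_area: bool,
                         hour_of_day: int) -> str:
        """Return the first type of intent ('suspicious') or the second type
        ('non-suspicious')."""
        night = hour_of_day < 6 or hour_of_day > 20
        if heading_towards_secure_area and (night or speed_mps > 2.5):
            return "suspicious"
        return "non-suspicious"

    def assess_threat_level(intent: str, obj_type: str,
                            distance_to_secure_area_m: float) -> int:
        """Return a threat level on a 0 (no threat) to 5 (critical) scale."""
        if intent != "suspicious":
            return 0
        level = 1
        if obj_type in ("adult male human", "adult female human", "four-wheel vehicle"):
            level += 1
        if distance_to_secure_area_m < 2000.0:
            level += 1
        if distance_to_secure_area_m < 500.0:
            level += 2
        return min(level, 5)

    intent = determine_intent(1.8, True, 23)
    print(intent, assess_threat_level(intent, "adult male human", 1500.0))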

Referring to FIG. 7F, based on the determination that the threat level assigned to the first object exceeds the first threshold value, the first processing circuitry 318 may initiate the threat alert procedure (as shown by arrow 772). In another embodiment, the threat level assigned to the first object may not exceed the first threshold value. In such a scenario, no threat alert procedure may be initiated by the first processing circuitry 318. Based on the initiation of the threat alert procedure, the first processing circuitry 318 may generate a first threat alert (as shown by arrow 774). The first threat alert may be indicative of a first entity identifier of the first object, the assigned threat level, the current position of the first object, the current velocity of the first object, and the forecasted trajectory (i.e., the series of forecasted positions and the series of forecasted velocities corresponding to a series of future time-instances) of the first object. The first processing circuitry 318 may determine the entity identifier of the first object by way of a look-up table that may be stored in the first memory 322. The look-up table, stored in the first memory 322, may be indicative of a plurality of entity identifiers for a plurality of object types. Each entity identifier may be a numeric or an alpha-numeric code that uniquely identifies an object type. For example, the look-up table may indicate that the first entity identifier is to be used for an object of a type that corresponds to “adult male human”. Similarly, a second entity identifier is to be used for an object of a type that corresponds to “adult female human”. Similarly, a third entity identifier is to be used for an object of a type that corresponds to “four-wheel vehicle”.

FIG. 8 represents a process flow diagram 800 that illustrates communication of the generated first threat alert, in accordance with an exemplary embodiment of the present disclosure. FIG. 8 is explained in conjunction with FIGS. 7A-7F. The process flow diagram 800 involves the first edge node 102a, the remote node 104, and the first through third user devices 106a-106c. In the current embodiment, it is assumed that the first memory 322 stores, therein, the escalation matrix 520.

The first edge node 102a may select, from the plurality of user devices 106, a set of user devices suitable for receiving the first threat alert (as shown by arrow 802). The set of user devices may be selected based on the information stored in the escalation matrix 520. The escalation matrix 520 may indicate that when the first threat level equals “3”, the first threat alert is to be communicated to the first through third user devices 106a-106c associated with the first authorization level. The first edge node 102a (i.e., the first processing circuitry 318) may communicate the first threat alert to the first through third user devices 106a-106c (as shown by arrows 802a-802c). The first through third user devices 106a-106c may display, on corresponding display screens, the information included in the first threat alert, e.g., the first entity identifier of the first object, the first threat level, the current position of the first object, the current velocity of the first object, the set of future positions of the first object, or the set of future velocities of the first object.
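By way of a non-limiting illustration, the following Python sketch shows how the entity-identifier look-up table described with reference to FIG. 7F and the escalation-matrix-based selection of user devices described above might be realized; the identifier codes, authorization levels, and device identifiers are hypothetical assumptions and are not taken from the disclosure.

    # Illustrative sketch of the entity-identifier look-up table and the
    # escalation-matrix-based selection of recipient user devices; all codes,
    # levels, and device identifiers are hypothetical.
    from typing import Dict, List

    # Look-up table mapping object types to entity identifiers.
    ENTITY_LOOKUP: Dict[str, str] = {
        "adult male human": "E-001",
        "adult female human": "E-002",
        "four-wheel vehicle": "E-003",
    }

    # Escalation matrix: threat level -> authorization level to be notified.
    ESCALATION_MATRIX: Dict[int, int] = {1: 1, 2: 1, 3: 1, 4: 2, 5: 3}

    USER_DEVICES: List[Dict] = [
        {"id": "106a", "authorization_level": 1},
        {"id": "106b", "authorization_level": 1},
        {"id": "106c", "authorization_level": 1},
        {"id": "106d", "authorization_level": 2},
    ]

    def entity_identifier(object_type: str) -> str:
        """Resolve the type of the detected object to its entity identifier."""
        return ENTITY_LOOKUP[object_type]

    def select_recipients(threat_level: int) -> List[str]:
        """Select user devices whose authorization level matches the escalation
        matrix entry for the assigned threat level."""
        required_level = ESCALATION_MATRIX.get(threat_level)
        return [d["id"] for d in USER_DEVICES
                if d["authorization_level"] == required_level]

    print(entity_identifier("adult male human"))  # -> 'E-001'
    print(select_recipients(3))                   # -> ['106a', '106b', '106c']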

The first edge node 102a may further communicate the first threat alert to the remote node 104 (as shown by arrow 806). The remote node 104 (i.e., the second processing circuitry 502) may generate a virtual environment based on the received first threat alert and the terrain signature of the geographical region 204, as described in the foregoing description of FIG. 1 (as shown by arrow 808). The generated virtual environment may be one of the two-dimensional environment, the three-dimensional environment, the VR environment, or the AR environment. The generated virtual environment may include a digital version of the first object and the digital version of the geographical region 204. The second processing circuitry 502 may display or render the generated virtual environment on the display device 508 (as shown by arrow 810). When the generated virtual environment is displayed on the display device 508, the generated virtual environment displays the digital version of the geographical region 204 overlaid with the digital version of the first object and the forecasted trajectory of the first object. The generated virtual environment may be viewed, using the display device 508, by any personnel at the remote node 104.
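A highly simplified sketch of assembling such a virtual environment is shown below; the scene representation (a terrain layer overlaid with an object marker and a trajectory polyline) and all field names are hypothetical and merely illustrate the overlay described above.

    # Minimal sketch of assembling a virtual environment from the terrain
    # signature and a received threat alert; the scene structure is hypothetical.
    from typing import Dict

    def build_virtual_environment(terrain_signature: Dict,
                                  threat_alert: Dict) -> Dict:
        """Overlay a digital version of the detected object and its forecasted
        trajectory on a digital version of the geographical region."""
        return {
            "mode": "3d",  # could also be "2d", "vr", or "ar"
            "terrain_layer": terrain_signature,
            "object_layer": {
                "entity_identifier": threat_alert["entity_identifier"],
                "position": threat_alert["current_position"],
                "velocity": threat_alert["current_velocity"],
            },
            "trajectory_layer": [p for _, p in threat_alert["forecasted_trajectory"]],
        }

    scene = build_virtual_environment(
        {"elevation_grid": [[560.0, 561.5], [559.0, 560.5]]},
        {"entity_identifier": "E-001",
         "current_position": (18.5204, 73.8567, 560.0),
         "current_velocity": (-0.00010, 0.00008, 0.0),
         "forecasted_trajectory": [(10.0, (18.5194, 73.8575, 560.0)),
                                   (20.0, (18.5184, 73.8583, 560.0))]},
    )
    print(scene["object_layer"]["entity_identifier"], len(scene["trajectory_layer"]))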

FIG. 9 represents a process flow diagram 900 that illustrates communication of the first threat alert, in accordance with another exemplary embodiment of the present disclosure. FIG. 9 is explained in conjunction with FIGS. 7A-7F. The process flow diagram 900 involves the first edge node 102a, the remote node 104, and the first through third user devices 106a-106c. In the current embodiment, it is assumed that the second memory 504 stores, therein, the escalation matrix 520.

The first edge node 102a (i.e., the first processing circuitry 318) may communicate the first threat alert to the remote node 104 (as shown by arrow 902). Based on the received first threat alert, the remote node 104 (i.e., the second processing circuitry 502) may generate the virtual environment (as shown by arrow 904). The second processing circuitry 502 may display or render the generated virtual environment on the display device 508 (as shown by arrow 906). When the generated virtual environment is displayed on the display screen of the display device 508, the generated virtual environment displays the digital version of the geographical region 204 overlaid with the digital version of the first object and the forecasted trajectory of the first object. The generated virtual environment may be viewed, using the display device 508, by any personnel at the remote node 104.

The remote node 104 (i.e., the second processing circuitry 502) may select, from the plurality of user devices 106, a set of user devices suitable for receiving the first threat alert (as shown by arrow 908). The set of user devices may be selected based on the information stored in the escalation matrix 520. The escalation matrix 520 may indicate that when the first threat level equals “3”, the first threat alert is to be communicated to the first through third user devices 106a-106c associated with the first authorization level. The remote node 104 (i.e., the second processing circuitry 502) may communicate the first threat alert to the first through third user devices 106a-106c (as shown by arrows 910a-910c). The first through third user devices 106a-106c may display, on corresponding display screens, the information included in the first threat alert (as shown by arrows 912a-912c). In an embodiment, the remote node 104 (i.e., the second processing circuitry 502) may further communicate, to the first through third user devices 106a-106c, simulation data pertaining to the generated virtual environment. The simulation data may include data or information that enables the first through third user devices 106a-106c to render the generated virtual environment on the display screens of the first through third user devices 106a-106c.

In an embodiment, the remote node 104 (i.e., the second processing circuitry 502) may generate a threat response based on the first threat alert. The threat response may be automatically generated, by way of a pre-defined algorithm, or by personnel at the remote node 104. Alternatively, the threat response may be selected by one of the first through third individuals and communicated to the remote node 104 by way of a corresponding user device of the first through third user devices 106a- 106c. The threat response may define a set of actions to be taken at the secure area 202 to mitigate the threat from the first object. In a non-limiting example, the secure area 202 may include therein, a controller (e.g., a programmable logic controller or a distributed control system) that is configured to control operations of various devices in the secure area 202. For example, the controller may be coupled to a set of doors situated at a perimeter of the secure area 202 and/or a set of lights situated at the perimeter of the secure area 202. In such a scenario, the remote node 104 (i.e., the second processing circuitry 502) may communicate the threat response to the controller in the secure area 202. Based on the threat response, the controller may issue a “close” instruction to the set of doors to close the set of doors. The controller may further issue an “ON” command to the set of lights, to switch on the set of lights.
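The translation of a threat response into controller commands could, for illustration, be sketched as follows; the controller interface, device names, and command strings are hypothetical and stand in for whatever programmable logic controller or distributed control system is deployed at the secure area.

    # Illustrative sketch of translating a threat response into controller
    # commands for perimeter devices; the interface and commands are hypothetical.
    from typing import Dict, List

    class PerimeterController:
        """Stand-in for a programmable logic controller at the secure area."""
        def __init__(self):
            self.issued: List[str] = []

        def issue(self, device: str, command: str) -> None:
            # Record the command as it would be sent to the coupled device.
            self.issued.append(f"{device}:{command}")

    def apply_threat_response(response: Dict, controller: PerimeterController) -> None:
        """Execute the set of actions defined by the threat response."""
        if response.get("close_perimeter_doors"):
            controller.issue("perimeter_doors", "CLOSE")
        if response.get("switch_on_perimeter_lights"):
            controller.issue("perimeter_lights", "ON")

    controller = PerimeterController()
    apply_threat_response({"close_perimeter_doors": True,
                           "switch_on_perimeter_lights": True}, controller)
    print(controller.issued)  # ['perimeter_doors:CLOSE', 'perimeter_lights:ON']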

FIGS. 10A-10C, collectively, represent a flow chart 1000 that illustrates a method for facilitating security management for the secure area 202, in accordance with an exemplary embodiment of the present disclosure. FIGS. 10A-10C are explained in conjunction with FIGS. 7A-7F and FIG. 8.

Referring to FIG. 10A, at 1002, the first and second pluralities of sensors generate the first and second pluralities of sensor outputs based on the monitoring of the geographical region 204 by the first and second pluralities of sensors. At 1004, the first processing circuitry 318 determines a likelihood of a presence of an object (e.g., the first object) in the geographical region 204 based on one or more of the first through sixth sets of sensor outputs (as described in the foregoing description of FIG. 7B). In the current embodiment, it is assumed that the first processing circuitry 318 determines the likelihood of a presence of a moving object in the geographical region 204 based on the first set of sensor outputs. At 1006, the first processing circuitry 318 compares the determined likelihood (e.g., “0.8”) of a presence of a moving object with the second threshold value (e.g., “0.5”). In other words, the first processing circuitry 318 determines whether the determined likelihood exceeds the second threshold value. If, at 1006, the first processing circuitry 318 determines that the determined likelihood does not exceed the second threshold value, 1008 is performed. At 1008, the first processing circuitry 318 determines that no object is detected yet. If, at 1006, the first processing circuitry 318 determines that the determined likelihood exceeds the second threshold value, 1010 is performed. At 1010, the first processing circuitry 318 detects the current orientation of each sensor of the first through sixth sets of sensors 302-312 based on the seventh set of sensor outputs received from the set of IMU sensors 314. At 1012, the first processing circuitry 318 adjusts the current orientation of one or more sensors (e.g., the set of visible spectrum sensors 306) of the first and second pluralities of sensors to focus in a direction where the moving object (e.g., the first object) is likely to be present (as described in the foregoing description of FIGS. 7B and 7C). The first processing circuitry 318 detects the moving object (e.g., the first object) based on a combination of two or more sensor outputs of the first through sixth sets of sensor outputs.

Referring to FIG. 10B, at 1014, the first processing circuitry 318 determines the current position and the current velocity of the detected object based on the first and second pluralities of sensor outputs using the various sensor fusion techniques. At 1016, the first processing circuitry 318 tracks the movement of the detected object over a period of time (as described in the foregoing description of FIG. 7C). At 1018, the first processing circuitry 318 forecasts a trajectory of the detected object (e.g., the trajectory of the first object) in the geographical region 204 based on the tracked movement of the detected object. The forecasted trajectory includes a series of future positions and a series of future velocities of the detected object. The series of future positions and the series of future velocities correspond to a series of future time-instances. At 1020, the first processing circuitry 318 determines an intent of the detected object using the eighth machine learning model 602h. The intent of the detected object is one of the first type of intent or the second type of intent. For the sake of brevity, it is assumed that the intent of the detected object is the first type of intent. At 1022, the first processing circuitry 318 provides the determined intent as input to the ninth machine learning model 602i for threat assessment of the detected object (as described in the foregoing description of FIG. 7E). The ninth machine learning model 602i outputs a threat level, from the plurality of threat levels, for the detected object, based on the input (i.e., feature values corresponding to the ninth plurality of features). At 1024, the first processing circuitry 318 assigns the threat level outputted by the ninth machine learning model 602i to the detected object (e.g., the first object). At 1026, the first processing circuitry 318 compares the assigned threat level (e.g., “3”) to the first threshold value (e.g., “0”). In other words, the first processing circuitry 318 determines whether the assigned threat level exceeds the first threshold value. The detected object is deemed a threat if its assigned threat level exceeds the first threshold value. If, at 1026, the first processing circuitry 318 determines that the assigned threat level does not exceed the first threshold value, 1002 is performed. If, at 1026, the first processing circuitry 318 determines that the assigned threat level exceeds the first threshold value, 1028 is performed.

Referring to FIG. 10C, at 1028, the first processing circuitry 318 initiates the threat alert procedure. At 1030, the first processing circuitry 318 generates a threat alert (e.g., the first threat alert) based on the initiation of the threat alert procedure. The generated threat alert is indicative of the entity identifier of the detected object, the assigned threat level, the current position, the current velocity, and the forecasted trajectory of the detected object (e.g., the first object). At 1032, the first processing circuitry 318 selects, from the plurality of user devices 106, a set of user devices (e.g., the first through third user devices 106a-106c) suitable for receiving the threat alert. The set of user devices is selected based on the authorization level (e.g., the first authorization level) associated with each of the set of user devices (as described in the foregoing description of FIG. 8). At 1034, the first processing circuitry 318 communicates the threat alert to the remote node 104 and each of the selected set of user devices.

FIG. 11 represents a flow chart 1100 that illustrates a method for facilitating security management for the secure area 202, in accordance with an exemplary embodiment of the present disclosure. FIG. 11 is described in conjunction with FIGS. 10A-10C. At 1102, the second processing circuitry 502 receives the threat alert and the terrain signature of the geographical region 204 from the first edge node 102a. It will be apparent to those of skill in the art that the terrain signature may not always be communicated to the remote node 104 along with the threat alert. The terrain signature may be communicated to the remote node 104 periodically (e.g., daily, weekly, monthly, or the like), since the geographical region 204 may be unlikely to undergo frequent terrain changes. At 1104, the second processing circuitry 502 generates a virtual environment based on the received terrain signature and the received threat alert. The generated virtual environment includes the forecasted trajectory of the detected object overlaid on the digital version of the geographical region 204. At 1106, the second processing circuitry 502 displays or renders the generated virtual environment on the display device 508.

FIG. 12 is a block diagram that illustrates a system architecture of a computer system 1200 for facilitating security management for the secure area 202, in accordance with an exemplary embodiment of the disclosure. An embodiment of the disclosure, or portions thereof, may be implemented as computer readable code on the computer system 1200. In one example, the plurality of edge nodes 102, the remote node 104, and the plurality of user devices 106 may be implemented in the computer system 1200 using hardware, software, firmware, non-transitory computer readable media having instructions stored thereon, or a combination thereof, and may be implemented in one or more computer systems or other processing systems. Hardware, software, or any combination thereof may embody modules and components used to implement the methods of FIGS. 10A-10C and 11.

The computer system 1200 may include a processor 1202 that may be a special purpose or a general-purpose processing device. The processor 1202 may be a single processor or multiple processors. The processor 1202 may have one or more processor “cores.” Further, the processor 1202 may be coupled to a communication interface 1204, such as a bus, a bridge, a message queue, the communication network 108, a multi-core message-passing scheme, or the like. The computer system 1200 may further include a main memory 1206 and a non-transitory computer readable medium 1208. Examples of the main memory 1206 may include RAM, ROM, and the like. The non-transitory computer readable medium 1208 may include a hard disk drive or a removable storage drive (not shown), such as a floppy disk drive, a magnetic tape drive, a compact disc, an optical disk drive, a flash memory, or the like. Further, the removable storage drive may read from and/or write to a removable storage device in a manner known in the art. In an embodiment, the removable storage device may be a non-transitory computer readable recording medium.

The computer system 1200 may further include an input/output (I/O) port 1210 and a communication infrastructure 1212. The I/O port 1210 may include various input and output devices that are configured to communicate with the processor 1202. Examples of the input devices may include a keyboard, a mouse, a joystick, a touchscreen, a microphone, and the like. Examples of the output devices may include a display screen, a speaker, headphones, and the like. The communication infrastructure 1212 may be configured to allow data to be transferred between the computer system 1200 and various devices that are communicatively coupled to the computer system 1200. Examples of the communication infrastructure 1212 may include a modem, a network interface (e.g., an Ethernet card), a communication port, and the like. Data transferred via the communication infrastructure 1212 may be signals, such as electronic, electromagnetic, optical, or other signals as will be apparent to a person skilled in the art. The signals may travel via a communications channel, such as the communication network 108, which may be configured to transmit the signals to the various devices that are communicatively coupled to the computer system 1200. Examples of the communication channel may include a wired, wireless, and/or optical medium such as cable, fiber optics, a phone line, a cellular phone link, a radio frequency link, and the like. The main memory 1206 and the non-transitory computer readable medium 1208 may refer to non-transitory computer readable mediums that may provide data that enables the computer system 1200 to implement the methods illustrated in FIGS. 10A-10C and 11.

The disclosed methods encompass numerous advantages. The disclosed methods describe the deployment of the edge node network (i.e., the plurality of edge nodes 102) for monitoring the geographical region 204, thereby facilitating completely automated security management of the secure area 202. Each edge node (e.g., the first edge node 102a) of the plurality of edge nodes 102 is equipped with various types of sensors (e.g., radar sensors, IR sensors, visible spectrum sensors, or the like) for detection of moving objects in the geographical region 204. This enables reliable and robust detection of objects in varied weather and visibility conditions. Further, sensor outputs (e.g., the first through seventh sets of sensor outputs) generated by sensors (e.g., the first through seventh sets of sensors 302-314) at an edge node (e.g., the first edge node 102a) are processed by processing circuitry (e.g., the first processing circuitry 318) at the edge node itself. This results in little to no latency in detection, tracking, and threat assessment of moving objects in the geographical region 204. Sensor fusion techniques are used for detecting, tracking, and monitoring the moving objects. This enables leveraging of the strengths of the various types of sensors included in the edge node for detecting the moving objects, tracking movements of the moving objects, and forecasting trajectories for the moving objects. The sensor fusion techniques allow for quick and highly accurate measurements of current positions, current velocities, and trajectories of detected moving objects as compared to conventional techniques that rely on singular sensor outputs. The disclosed methods also describe determination of an intent and a threat level of each detected object using machine learning (e.g., the eighth and ninth machine learning models 602h and 602i). This enables automated determination of the intent of each detected moving object, based on various criteria such as, but not limited to, a type of the corresponding moving object, a current position of the corresponding moving object, a current velocity of the moving object, a forecasted trajectory of the moving object, a time of day, or the like. Machine learning-based assignment of threat levels allows for automated threat assessments of detected objects (i.e., moving objects). Complete automation of detection, tracking, intent determination, and threat assessment of objects results in quick generation of threat alerts for the detected objects.

The disclosure further describes generation of virtual environments based on the received threat alerts and the terrain signature of the geographical region 204. This allows for accurate and true-to-life visualization of the movement of detected objects (e.g., the first object) in the geographical region 204 by personnel associated with the security management of the secure area 202. Threat alerts (e.g., the first threat alert) are communicated to user devices (e.g., the first through third user devices 106a-106c), based on threat levels associated with the threat alerts and an authorization level associated with each user device. This allows for communication of security alerts to various individuals on a need-to-know basis.

Further, the plurality of edge nodes 102 may communicate threat alerts, and not raw sensor outputs (e.g., the first through seventh sets of sensor outputs) or data, to the remote node 104. This allows for seamless performance of the plurality of edge nodes 102 and the remote node 104 even if the communication network 108 has a low data throughput rate.

Techniques consistent with the disclosure provide, among other features, systems and methods for facilitating security management of the secure area 202. While various exemplary embodiments of the disclosed systems and methods have been described above, it should be understood that they have been presented for purposes of example only, and not limitation. The foregoing description is not exhaustive and does not limit the disclosure to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practicing the disclosure, without departing from its breadth or scope. While various embodiments of the disclosure have been illustrated and described, it will be clear that the disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the disclosure, as described in the claims.