

Title:
A PUSH UPDATE SYSTEM
Document Type and Number:
WIPO Patent Application WO/2008/029261
Kind Code:
A3
Abstract:
A push update system for a security system having a plurality of network nodes connected in a hierarchy to a root node, including: (i) an upstream agent of an upstream node for sending updates for respective downstream nodes; (ii) a schedule agent for scheduling processing by the upstream agent; (iii) a downstream agent of a downstream node for receiving and storing updates; and (iv) an update agent for processing received updates to queue updates for a downstream node. The root node includes the upstream agent and the schedule agent. Leaf nodes include the downstream agent, and intermediate nodes include all four agents. The updates include Internet threat signatures for Internet protection appliances of the leaf nodes.

Inventors:
WEBB-JOHNSON MARK CRISPIN (CN)
Application Number:
PCT/IB2007/002567
Publication Date:
May 29, 2008
Filing Date:
September 06, 2007
Assignee:
NETWORK BOX CORP LTD (CN)
WEBB-JOHNSON MARK CRISPIN (CN)
International Classes:
H04L12/24
Foreign References:
CN1350230A2002-05-22
EP1553492A22005-07-13
US20030204624A12003-10-30
Other References:
See also references of EP 2055049A4
Claims:

CLAIMS:

1. A push update system for a security system having a plurality of network nodes connected in a hierarchy to a root node, including: an upstream agent of an upstream node for sending updates for respective downstream nodes; a schedule agent for scheduling processing by said upstream agent; a downstream agent of a downstream node for receiving and storing updates; and an update agent for processing received updates to queue updates for a downstream node.

2. A push update system as claimed in claim 1, wherein an intermediate node of the hierarchy includes said upstream agent, said schedule agent, said downstream agent and said update agent.

3. A push update system as claimed in claim 1, wherein said root node of the hierarchy includes said upstream agent and said schedule agent.

4. A push update system as claimed in claim 1, wherein a leaf node of the hierarchy includes said downstream agent.

5. A push update system as claimed in claim 4, wherein said leaf node comprises an Internet protection appliance.

6. A push update system as claimed in any one of the preceding claims, wherein said updates include at least one of the following:

(i) files maintained on an upstream node;

(ii) files maintained on a downstream node that are backed up on an upstream node;

(iii) signatures;

(iv) packages for one time delivery and installation, such as self-extracting files; and

(v) commands to be run on a downstream node of the hierarchy.

7. A push update system as claimed in claim 1, wherein said downstream agent executes commands of an update.

8. A push update system as claimed in claim 1, wherein said downstream agent downloads updates for the downstream node, stores the data of the update, and determines whether installation of a package or commands are to be executed.

9. A push update system as claimed in claim 8, wherein said downstream agent validates credentials of a connecting upstream node.

10. A push update system as claimed in claim 1, 8 or 9, wherein said upstream agent is invoked with node identification (ID) data representing a downstream node, and determines a last update time the node identified by the node ID data received an update, collects updates required since the last update time, checks connectivity to the node and builds an update package with the required updates for delivery to the downstream node.

11. A push update system as claimed in claim 10, wherein said upstream agent deletes the update package when the delivery is unsuccessful, and sets the last update time.

12. A push update system as claimed in claim 1, 10 or 11, wherein said schedule agent processes updates queued for downstream nodes and based on node selection criteria data, invokes the upstream agent for a downstream node with corresponding node identification (ID) data.

13. A push update system as claimed in claim 12, wherein said node selection criteria data includes priority data and/or load balancing criteria data.

14. A push update system as claimed in claim 1, 12 or 13, wherein said update agent processes the received updates stored in a download directory for parameters to determine updates to be sent to a downstream node, and moves the updates for said downstream node to an output directory for processing by the schedule agent.

15. A push update system as claimed in claim 1, wherein signatures received by the root node are added to an update for a downstream node based on configuration data for the downstream node.

16. A push update system for a security system, including: a central server system for receiving threat signatures including an upstream agent for sending updates with said threat signatures for respective downstream nodes, and a schedule agent for scheduling operation of said upstream agent for a downstream node; and downstream nodes including a security appliance for receiving and storing said updates.

17. A push update system as claimed in claim 16, wherein said downstream nodes include: at least one intermediate node including said upstream agent, said schedule agent and a downstream agent for receiving and storing updates, and an update agent for processing received updates to queue updates for a downstream node; and leaf nodes including said security appliance and said downstream agent.

18. A push update system as claimed in claim 17, wherein said downstream agent validates credentials of a connecting upstream node, downloads updates for the downstream node, stores the data of the update, and determines whether installation of a package or commands are to be executed.

19. A push update system as claimed in claim 18, wherein said downstream agent validates credentials of a connecting upstream node.

20. A push update system as claimed in claim 16, 18 or 19, wherein said upstream agent is invoked with node identification (ID) data representing a downstream node, and determines a last update time the node identified by the node ID data received an update, collects updates required since the last update time, checks connectivity to the node and builds an update package with the required updates for delivery to the downstream node.

21. A push update system as claimed in claim 20, wherein said upstream agent deletes the update package when the delivery is unsuccessful, and sets the last update time.

22. A push update system as claimed in any one of claims 16 to 21, wherein said schedule agent processes updates queued for downstream nodes and based on node selection criteria data, invokes the upstream agent for a downstream node with corresponding node identification (ID) data.

23. A push update system as claimed in claim 22, wherein said node selection criteria data includes priority data and/or load balancing criteria data.

24. A push update system as claimed in claim 17, 22 or 23, wherein said update agent processes the received updates stored in a download directory for parameters to determine updates to be sent to a downstream node, and moves the updates for said downstream node to an output directory for processing by the schedule agent.

25. A push update system as claimed in claim 16, wherein said downstream agent executes commands of an update.

26. A push update system as claimed in any one of claims 16 to 25, wherein said updates include at least one of the following:

(i) files maintained on an upstream node;

(ii) files maintained on a downstream node that are backed up on an upstream node;

(iii) signatures;

(iv) packages for one time delivery and installation, such as self-extracting files; and

(v) commands to be run on a downstream node of the hierarchy.

27. A push update system for a security system as claimed in any one of claims 16 to 26, wherein said signatures received by the central server system are added to an update for a downstream node based on configuration data for the downstream node.

Description:

A PUSH UPDATE SYSTEM

FIELD

The present invention relates to a push update system for a security system.

BACKGROUND

Network perimeter security systems are installed at the edge of local and wide area networks of entities to protect the networks from being compromised by external networks.

For instance, a connection to the Internet may be protected by a number of machines including a security server connected directly to the Internet, to protect against a wide variety of Internet threats, such as viruses, worms, trojans, phishing, spyware, SPAM, undesirable content and hacking. Configuration files of the security server include signatures or pattern files that are used as a basis to detect the threats and need to be updated on a regular basis. Given the frequency with which Internet threats change and are created, it is a significant challenge to ensure that the security servers are updated in a regular and timely manner. Security organisations, such as Symantec Corporation, Trend Micro Incorporated and Kaspersky Laboratories, release notifications and data that is used to compile signatures for threats on a frequent basis (hourly or, in some cases, more frequently), requiring configuration files in large numbers of security servers around the world to be correspondingly updated.

Most security servers maintain or update their signature files by polling master or central servers for updates. This pull-based approach means that, on average, a security server is out of date by the time taken for an update to propagate from the polled server to the security server, plus half the time between polls. The propagation delay may also increase significantly when congestion occurs, given that thousands of machines located around the world may be polling the same server for an update.

Also, the master or central server normally relies upon the polling server to advise of its current configuration, or must otherwise determine the updates that are required. The master server usually does not maintain any information regarding the configuration of the security servers. This communications requirement involves a further overhead that impacts on the efficiency of the update process. This requirement for bidirectional communication between the polling and master servers also gives rise to significant difficulties when updates need to be performed at locations where the network connections, particularly the Internet connections, are not stable and are prone to failure.

Accordingly, it is desired to address the above or at least provide a useful alternative.

SUMMARY

In accordance with the present invention there is provided a push update system for a security system having a plurality of network nodes connected in a hierarchy to a root node, including: an upstream agent of an upstream node for sending updates for respective downstream nodes; a schedule agent for scheduling processing by said upstream agent; a downstream agent of a downstream node for receiving and storing updates; and an update agent for processing received updates to queue updates for a downstream node.

The present invention also provides a push update system for a security system, including: a central server system for receiving threat signatures including an upstream agent for sending updates with said threat signatures for respective downstream nodes, and a schedule agent for scheduling operation of said upstream agent for a downstream node; and downstream nodes including a security appliance for receiving and storing said updates.

DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the present invention are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:

Figure 1 is a block diagram of a preferred embodiment of a security server connected to a local area network (LAN);

Figure 2 is a diagram of the architecture of a preferred embodiment of a security system;

Figure 3 is a block diagram of a preferred embodiment of a node of the system;

Figure 4 is a flow diagram of a downstream agent process of a node;

Figure 5 is a flow diagram of an upstream agent process of a node;

Figure 6 is a flow diagram of a schedule process of a node; and

Figure 7 is a flow diagram of an update agent process of a node.

DESCRIPTION OF PREFERRED EMBODIMENTS

A security server 100, as shown in Figure 1, provides an Internet threat protection appliance to protect a local area network (LAN) 102 of an entity from a wide variety of Internet threats. The threats include viruses, worms, trojans, phishing, spyware, SPAM, undesirable content and hacking, and any other form of unwanted code or intrusion into the LAN 102. The security server or box 100 is connected directly to an external communications network 104, such as the Internet, by a router 106, thereby being positioned between the LAN 102 and the Internet 104. The security server or box 100 may also provide support for a demilitarised zone (DMZ) 108 and may include a number of machines. The box 100 can, for example, be one of the threat protection appliances produced by Network Box Corporation. The network architecture in which the security server 100 is used can vary considerably. For example, a number of LANs or a wide area network (WAN) may be protected by one box 100, or the box 100 may support more than one DMZ.

A security system 200, as shown in Figure 2, includes a number of boxes 100 which are all updated by configuration files delivered from a central or headquarters network operations centre (NOC) 202. The headquarters NOC 202 provides a root node of the security system 200, and the security boxes 100 are leaf nodes of the security system 200, connected in a hierarchy by intervening nodes 210 and 212 of intermediate levels so that the security system 200 has a tree structure, as shown in Figure 2. The intervening nodes 210, 212 include regional NOCs 210 allocated to cover a geographic region, such as Australia and New Zealand, and customer NOCs 212 which may be allocated to serve one or more security boxes 100 of an entity. In alternative embodiments, the intermediate NOCs 210, 212 may be varied in number as desired, or omitted altogether. The configuration files are delivered from an upstream node (typically, but not always, the root node 202) to downstream nodes (typically the leaf nodes 100) via the intermediate nodes 210 and 212, as updates, using a push update system of the security system 200.

The box 100 and the nodes 202, 210 and 212 each include a central processing unit, volatile memory, permanent storage (eg flash memory, hard disk) and at least one network interface card for connection to the public and local networks. The box 100 and the nodes 202, 210, 212 can be implemented using general purpose computers. Also, ASIC based systems operating with flash memory can be used. The components 310 to 316 of the update system, discussed below, are implemented using computer program instruction code written in languages such as Perl, C, C++ and Java, and stored on the memory storage of the boxes 100 and nodes 202, 210 and 212. Alternatively, the processes performed by the components 310 to 316 may be implemented at least in part by dedicated hardware circuits, such as ASICs or FPGAs.

For the push update system, the nodes 210, 212, as shown in Figure 3, each include a downstream agent, recag, 310, an upstream agent, sendag, 312, a schedule agent, syncag, 314, and an update agent 316, which all run on the operating system 302 of the node 210, 212. The agents 310 to 316 utilise a database 320 maintained in the node 210, 212 by a database server 304, such as MySQL. The root node 202 has the same architecture as the intermediate nodes 210, 212 and runs instances of syncag and sendag, but does not include the downstream agent 310 for a downstream node nor the update agent 316 that is used by an intermediate node 210, 212. The leaf nodes 100, ie the boxes 100, only need to include recag 310, as there are no downstream nodes following a leaf node. The box 100 does not run instances of the upstream agent 312, schedule agent 314 or the update agent 316. In other words, the nodes of the hierarchy of the update system run recag 310 in their capacity as downstream nodes, sendag 312 and syncag 314 in their capacity as upstream nodes, and the update agent 316 in their capacity as an intermediate regional or customer node 210, 212.
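
By way of illustration only, the allocation of agents to node types described above can be summarised in a short sketch. The NodeRole and Agent names, and the Java form, are assumptions made for this example rather than the patent's implementation (the description notes only that the components may be written in languages such as Perl, C, C++ and Java).

// Illustrative sketch only: which agents each class of node runs, per the
// description above. The NodeRole and Agent names are assumptions made for
// this example, not identifiers from the patent.
import java.util.EnumSet;
import java.util.Set;

public class NodeRoles {

    enum Agent { RECAG, SENDAG, SYNCAG, UPDATE_AGENT }

    enum NodeRole { ROOT_NOC, INTERMEDIATE_NOC, LEAF_BOX }

    static Set<Agent> agentsFor(NodeRole role) {
        return switch (role) {
            // Root node 202: sends and schedules updates; nothing is upstream of it.
            case ROOT_NOC -> EnumSet.of(Agent.SENDAG, Agent.SYNCAG);
            // Regional/customer NOCs 210, 212: act as both downstream and upstream nodes.
            case INTERMEDIATE_NOC -> EnumSet.of(Agent.RECAG, Agent.SENDAG,
                                                Agent.SYNCAG, Agent.UPDATE_AGENT);
            // Leaf boxes 100: only receive updates.
            case LEAF_BOX -> EnumSet.of(Agent.RECAG);
        };
    }

    public static void main(String[] args) {
        for (NodeRole role : NodeRole.values()) {
            System.out.println(role + " runs " + agentsFor(role));
        }
    }
}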

The updates delivered by the push update system comprise configuration files that belong to one of five categories (sketched in the example following this list):

1. Files maintained on a NOC 202, 210, 212 that need to be sent, or pushed, to a downstream node. This includes new versions of executable files.

2. Files maintained on a downstream node 210, 212, 100 under management that need to be backed up on an upstream node, eg a NOC 202, 210, 212, and possibly restored on the downstream node from the upstream node at a later date.

3. Signatures. This includes signatures or pattern files for SPAM and malicious software (malware). The signatures are used to update the signatures held on the databases 320 of the boxes 100, and in most instances are the same for all of the boxes 100. Although the boxes 100 may have different configurations for dealing with Internet threats, the signatures used by the boxes are normally the same. The root node 202 may receive signatures regularly throughout the day, requiring the boxes 100 to be incrementally updated, as described below.

4. Packages for one time delivery. This includes files that are delivered once to a downstream node and for which there is no subsequent maintenance or monitoring. The packages may include self-extracting files for extraction and installation. Accordingly, no subsequent synchronisation is required.

5. Jobs. The jobs include a series of commands to be run on a downstream node and then the results returned to an upstream node.
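
The five categories above can be pictured as a simple data model. The following sketch is illustrative only; the type names (UpdateCategory, Update) and fields are assumptions rather than identifiers used by the system.

// Hypothetical data model for the five update categories listed above.
// The type names (UpdateCategory, Update) are illustrative assumptions.
import java.time.Instant;
import java.util.List;

public class UpdateModel {

    enum UpdateCategory {
        MAINTAINED_FILE,   // 1. files maintained on a NOC, pushed downstream
        BACKED_UP_FILE,    // 2. files maintained downstream, backed up upstream
        SIGNATURE,         // 3. SPAM/malware signatures or pattern files
        PACKAGE,           // 4. one-time delivery, e.g. a self-extracting installer
        JOB                // 5. commands run on a downstream node, results returned
    }

    record Update(String downstreamNodeId,
                  UpdateCategory category,
                  List<String> payloadPaths,
                  Instant modifiedTime) { }

    public static void main(String[] args) {
        Update u = new Update("box-100-example", UpdateCategory.SIGNATURE,
                              List.of("signatures/malware.db"), Instant.now());
        System.out.println(u);
    }
}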

All of the updates are prepared before a connection is made to a downstream node, so the connection can be fully utilised once established. This is advantageous where Internet connections are unreliable and the elapsed time during which the connection needs to be maintained needs to be minimised.

The downstream agent 310, recag, runs on a downstream node 210, 212, 100 and acts as an agent to receive delivered updates and execute commands, by executing a downstream process as shown in Figure 4. The downstream agent waits for connection requests (step 402) from upstream nodes, and on receiving a request will accept the connection. Connections between the nodes of the security system use available public communications networks and standard Internet protocols with appropriate cryptographic mechanisms. On accepting the connection, the agent seeks to validate identifying data and credentials, such as digital signatures, of the connecting upstream node (404). The process halts if the credentials are invalid, but if they are validated the agent 310 proceeds to download the update from the connecting upstream node (406). A validation process (408) is performed on the downloaded update to determine whether it is valid, and if not the process exits. Otherwise, if the downloaded data is valid, the update is stored in a download directory of the database 320 (409). A determination is made at step 410 as to whether the update is a Job. If so, the downstream agent 310 executes the commands of the Job (412) and returns the results of the execution as an output (414) to the connecting upstream node. The agent 310 then proceeds to step 418. If at step 410 the update is determined to be a package, then the package is installed by the agent 310, for example by executing a self-extracting file (416). An acknowledgment status is then returned at step 418 to advise that the installation has been completed or that returned results are available. The delivery status of other updates is also reported. The instance of the agent 310 for the connection then completes, and the agent 310 waits to spawn another instance for an incoming connection from an upstream node (420). Maintained configuration files and signatures are simply stored on the database 320 once validated (409).
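
The following is a minimal sketch of the downstream process of Figure 4, assuming a plain TCP listener and placeholder helper methods. The listening port, class name and helper names are assumptions made for illustration and are not part of the patent.

// Minimal sketch of the downstream agent (recag) loop of Figure 4.
// The listening port, helper methods and update representation are
// assumptions made for illustration only.
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.file.Files;
import java.nio.file.Path;

public class RecagSketch {

    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(7000)) {    // assumed port
            while (true) {                                        // step 420: wait for the next connection
                try (Socket upstream = listener.accept()) {       // step 402: accept connection request
                    handle(upstream);
                } catch (IOException e) {
                    System.err.println("connection failed: " + e.getMessage());
                }
            }
        }
    }

    static void handle(Socket upstream) throws IOException {
        if (!credentialsValid(upstream)) return;                  // step 404: validate upstream credentials
        Path update = downloadUpdate(upstream);                   // step 406: download the update
        if (!updateValid(update)) return;                         // step 408: validate downloaded data
        Path stored = storeInDownloadDirectory(update);           // step 409: store in download directory
        if (isJob(stored)) {
            String results = runCommands(stored);                 // step 412: execute Job commands
            returnResults(upstream, results);                     // step 414: return results upstream
        } else if (isPackage(stored)) {
            installPackage(stored);                               // step 416: e.g. run self-extracting file
        }
        acknowledge(upstream);                                    // step 418: report delivery/installation status
    }

    // Placeholder stubs standing in for the real cryptographic checks,
    // file transfer, job execution and package installation.
    static boolean credentialsValid(Socket s) { return true; }
    static Path downloadUpdate(Socket s) throws IOException { return Files.createTempFile("update", ".dat"); }
    static boolean updateValid(Path p) { return Files.exists(p); }
    static Path storeInDownloadDirectory(Path p) { return p; }
    static boolean isJob(Path p) { return false; }
    static boolean isPackage(Path p) { return false; }
    static String runCommands(Path p) { return ""; }
    static void returnResults(Socket s, String results) { }
    static void installPackage(Path p) { }
    static void acknowledge(Socket s) { }
}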

The upstream agent, sendag, 312 runs on an upstream node 202, 210, 212 to perform an upstream transmission process, as shown in Figure 5. The upstream agent 312 connects to recag 310 on a downstream node and sends updates to that node. The upstream agent 312 is invoked with a node identification (ID) data variable as an argument. The node ID identifies a downstream node to which an update package may be delivered. The node ID may be unique to a box 100 or an intermediate node 210, 212, and identifies a node immediately below the current node in the hierarchy running the instance of the upstream agent (step 502). On being invoked, the upstream agent 312 determines the last time the node identified by the node ID received a successful update (step 504), based on successful update time data 506 stored in the database 320. The agent 312 then determines (508) whether a connection can be made to the next node in the hierarchy. If a connection can be made to the node, then the upstream agent collects all update data required and modified since the last update time (510). The update data can be collected from a variety of sources, including an output spool directory 512 of the database 320 which includes updates received from other nodes. The package is built for delivery (514) so as to form the update, and this is delivered to the downstream node (516). The upstream agent then receives the delivery status reported by the downstream node. If the delivery is deemed to have failed at step 518, then a delete package process (520) is performed so as to delete the package, as the update required when another delivery attempt is made may be different. If the delivery is deemed to be correct (518), then the update time stored in the database 320 of the current node running sendag is updated (522).
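
A corresponding sketch of the upstream transmission process of Figure 5 is given below. The database and delivery helpers are placeholders, and the class and method names are assumptions made for this example.

// Sketch of the upstream agent (sendag) process of Figure 5, invoked with a
// node ID. Data-access and delivery helpers are placeholder assumptions.
import java.time.Instant;
import java.util.List;

public class SendagSketch {

    public static void main(String[] args) {
        String nodeId = args.length > 0 ? args[0] : "example-node";    // step 502: node ID argument
        Instant lastUpdate = lastSuccessfulUpdateTime(nodeId);          // steps 504/506: last successful update time
        if (!canConnect(nodeId)) {                                      // step 508: connectivity check
            return;
        }
        List<String> updates = collectUpdatesSince(nodeId, lastUpdate); // steps 510/512: collect from output spool
        String pkg = buildPackage(updates);                             // step 514: build the update package
        boolean delivered = deliver(nodeId, pkg);                       // step 516: deliver to downstream recag
        if (!delivered) {                                               // step 518: delivery status
            deletePackage(pkg);                                         // step 520: the next attempt may need a different package
        } else {
            storeLastUpdateTime(nodeId, Instant.now());                 // step 522: record new last update time
        }
    }

    // Placeholder stubs for the database and network operations.
    static Instant lastSuccessfulUpdateTime(String nodeId) { return Instant.EPOCH; }
    static boolean canConnect(String nodeId) { return true; }
    static List<String> collectUpdatesSince(String nodeId, Instant since) { return List.of(); }
    static String buildPackage(List<String> updates) { return "package"; }
    static boolean deliver(String nodeId, String pkg) { return true; }
    static void deletePackage(String pkg) { }
    static void storeLastUpdateTime(String nodeId, Instant time) { }
}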

The schedule agent, syncag, 314 runs on an upstream node 202, 210, 212 and performs a schedule process, as shown in Figure 6. The agent 314 monitors the updates queued for downstream nodes and invokes instances of sendag 312 to process them. The agent 314 accesses the output spool directory 512 of the database 320 to determine all of the downstream nodes for which there are updates queued for delivery (602). At step 604, a determination is made as to the node that has the highest priority for delivery of an update. This determination is based on priority data generated by a prioritisation process 606. For example, this may determine that updates are scheduled for delivery to intermediate nodes (ie other NOCs) before updates to leaf nodes 100. An instance of sendag 312 is then invoked for the highest priority node, with its node ID, when it is determined to be best to invoke that agent process based on load balancing criteria data. The load balancing criteria data is produced by a load balancing process 610. For example, the process may determine that updates are to be delivered using multiple Internet connections, or balanced across them. The load balancing process 610 may also operate on data representative of the Internet topology to determine connections that should be established when transmitting to NOCs in a number of countries. If updates are being sent to a number of downstream NOCs, then the delivery process may need to be balanced so that each NOC receives updates in parallel rather than serially. Operation of syncag 314 then returns to step 602.
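
The schedule process of Figure 6 can be sketched as a simple loop. The prioritisation and load balancing policies shown below (intermediate NOCs first, a fixed pool of parallel deliveries) are simplified assumptions standing in for the prioritisation process 606 and load balancing process 610, and the names are illustrative only.

// Sketch of the schedule agent (syncag) loop of Figure 6: scan the output
// spool for nodes with queued updates, pick the highest-priority node, and
// invoke a sendag instance for it when load balancing permits.
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SyncagSketch {

    record QueuedNode(String nodeId, boolean isIntermediateNoc) { }

    // Assumed load-balancing policy: a small pool bounds how many deliveries
    // run in parallel (e.g. one per available Internet connection).
    static final ExecutorService deliveries = Executors.newFixedThreadPool(4);

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            List<QueuedNode> queued = nodesWithQueuedUpdates();      // step 602: scan output spool directory
            queued.stream()
                  // steps 604/606: example priority rule, intermediate NOCs before leaf boxes
                  .max(Comparator.comparing(QueuedNode::isIntermediateNoc))
                  // step 610: invoke sendag for the chosen node when a delivery slot is free
                  .ifPresent(node -> deliveries.submit(() -> invokeSendag(node.nodeId())));
            Thread.sleep(1000);                                      // return to step 602
        }
    }

    // Placeholder stubs for spool scanning and the sendag invocation.
    static List<QueuedNode> nodesWithQueuedUpdates() { return List.of(); }
    static void invokeSendag(String nodeId) { }
}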

The update agent 316 performs an update watching process, as shown in Figure 7. The update agent 316 monitors the updates received by recag 310 and, based on configuration parameters, such as filename pattern matches, copies particular updates from the download directory of the database 320 to the output spool directory for release by syncag 314 to downstream nodes. The update agent 316 parses all of the files in the download directory (702) for parameters that match configuration parameters 704 stored in the database 320. Any update files that meet the matching criteria are then moved to the output directory (706) of the database 320, for subsequent access by sendag 312.
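
The update watching process of Figure 7 can be sketched as a directory scan. The directory paths and the source of the filename patterns below are assumptions made for illustration.

// Sketch of the update agent process of Figure 7: parse files in the download
// directory, match them against configured filename patterns, and move the
// matches to the output (spool) directory for sendag.
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.regex.Pattern;

public class UpdateAgentSketch {

    public static void main(String[] args) throws IOException {
        Path downloadDir = Path.of("/var/spool/push/download");    // assumed location
        Path outputDir   = Path.of("/var/spool/push/output");      // assumed location
        List<Pattern> patterns = configuredPatterns();              // configuration parameters 704

        try (DirectoryStream<Path> files = Files.newDirectoryStream(downloadDir)) {
            for (Path file : files) {                               // step 702: parse files in download directory
                String name = file.getFileName().toString();
                boolean matches = patterns.stream()
                                          .anyMatch(p -> p.matcher(name).matches());
                if (matches) {                                      // matching criteria met
                    Files.move(file, outputDir.resolve(name),       // step 706: move to output directory
                               StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }

    // Placeholder: in the described system the patterns would come from the
    // node's database 320; this example pattern is hypothetical.
    static List<Pattern> configuredPatterns() {
        return List.of(Pattern.compile("signature-.*\\.db"));
    }
}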

The push update system is bandwidth efficient primarily because only updates that are required are transmitted, and only when a connection is available. Signature updates may be received by the root node 202 on a regular basis, but are only delivered in their entirety at set periods. For example, the root node 202 may receive a number of signature updates through the day, but for delivery the root node bundles the signatures together, building the bundles up incrementally if no connectivity is available. For example, as shown in the table below, the root node 202 may receive 1,922 signatures (numbered 100000 to 101921) over a given period, but these are compiled between event resets, so that if no connectivity is available during the period covered by Updates 1 to 5, then when connectivity is established the system only delivers Updates 4 and 5. The configuration of each box 100 is controlled by its upstream node, and only the updates required for a particular box's configuration are delivered. Only the latest version of a file is delivered if multiple updates are queued for a box. For instance, if a box 100 requires an update to a sub-system x, which involves changes to files x3, x7 and x24, then only those three files are delivered, instead of updating all components.
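
The coalescing of queued updates, so that only the latest version of each changed file is delivered, can be sketched as follows. The file names, version numbering and data shapes are hypothetical and chosen only to mirror the x3, x7, x24 example above.

// Illustrative sketch of coalescing queued updates for one box so that only
// the latest version of each changed file is delivered.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class UpdateCoalescing {

    record FileVersion(String path, int version) { }

    // Later queued updates overwrite earlier entries for the same path, so the
    // package built for delivery carries each file only once, at its latest version.
    static Map<String, FileVersion> coalesce(List<List<FileVersion>> queuedUpdates) {
        Map<String, FileVersion> latest = new HashMap<>();
        for (List<FileVersion> update : queuedUpdates) {
            for (FileVersion f : update) {
                latest.put(f.path(), f);
            }
        }
        return latest;
    }

    public static void main(String[] args) {
        // Hypothetical example: three queued updates to sub-system x touch
        // files x3, x7 and x24; x3 is touched twice, so only version 2 is sent.
        List<List<FileVersion>> queued = List.of(
            List.of(new FileVersion("x3", 1), new FileVersion("x7", 1)),
            List.of(new FileVersion("x3", 2)),
            List.of(new FileVersion("x24", 1)));
        System.out.println(coalesce(queued).values());
    }
}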

There is no negotiation between the updating upstream node and the downstream node being updated as to the updates that are required. This is determined by the updating upstream node, and again this reduces communications overhead.

In the push update system, the only communications overhead is the time of propagation from an upstream node to a downstream node, and therefore the time that a file is out of date on a downstream node does not depend on any time between polls, as in a pull-based polling system. The push-based update system is able to determine the updates required even when connectivity is not available, and uses connections efficiently when they are available.

A downstream node can also be configured to allow receipt of updates from more than one upstream node. This provides redundancy and also flexibility to configure for different updates to be sent from different upstream nodes.

Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention, as herein described with reference to the accompanying drawings.

The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.