Title:
METHOD AND SYSTEM FOR MANAGING NETWORK UTILISATION
Document Type and Number:
WIPO Patent Application WO/2016/146853
Kind Code:
A1
Abstract:
A method and system for managing network utilisation uses data traffic groups. Incoming and outgoing data flows are assigned, based on traffic profile matching rules, to a data traffic group. For a data traffic group a number of quality parameters, such as maximum loss and jitter, are set. A data traffic forecast is made based on collected previous data traffic information. For possible paths, such as tunnels, for data traffic groups, parameters such as available bandwidth and quality parameters are monitored. A traffic data steering plan is made using the gathered information and the data traffic forecast, and the data traffic for each data traffic group is steered on the basis of the data steering plan. In preferred embodiments additional bandwidth via a satellite is requested.

Inventors:
LOOS ERIC CHRISTIAN BERTHOLD (BE)
Application Number:
PCT/EP2016/056169
Publication Date:
September 22, 2016
Filing Date:
March 21, 2016
Assignee:
BELGACOM INT CARRIER SERVICES (BE)
International Classes:
H04L12/24
Foreign References:
US8811172B1 (2014-08-19)
US20100214920A1 (2010-08-26)
Attorney, Agent or Firm:
KIRKPATRICK (Avenue Wolfers 32, 1310 La Hulpe, BE)
Claims:

1. Method for managing network utilisation comprising:

- establishing data traffic groups for a network, wherein for each data traffic group a number of traffic profile matching rules (71) and a number of traffic profile delivery rules (72) for data transport for a data traffic group are set,

- assigning outgoing and/or incoming data traffic from or to a network to a data traffic group based on the data traffic matching rules (71),

- sending and/or receiving assigned data traffic to and/or from the network via a forwarding device (1a, 3a),

- collecting data traffic information for each said data traffic group,

- providing a data traffic forecast (91) for data transfer for each data traffic group based on collected previous data traffic information for said each data traffic group,

- comparing the actual data traffic to the data traffic forecast,

- monitoring available capacity and quality characteristics for transfer of data via links forming potential data traffic paths,

- making a data steering plan for steering the data traffic groups over the potential data traffic paths, wherein the steering plan is established by comparing and pairing the capacity and quality characteristics of the data traffic paths to the traffic profile delivery rules (72) for a data traffic group,

- steering data traffic for each data traffic group based on the traffic steering plan via paired data traffic paths.

2. Method as claimed in claim 1, wherein the capabilities of permanent paths are matched to the present data traffic or the data traffic forecast and, if the permanent paths do not meet the requirements, a path via a satellite is requested to provide on-demand resources via a satellite link.

3. Method as claimed in claim 1 or 2, wherein assigning and steering outgoing data traffic is performed in a service element (1) of a network connected to the forwarding device (1a).

4. Method as claimed in claim 1, 2 or 3, wherein incoming data traffic is assigned and steered to a network in and via a central service element (3).

5. Method as claimed in any of the preceding claims, wherein a service element (1) of a network receives the number of traffic profile matching rules (71) and the number of traffic profile delivery rules (72) sent by a user of the network via a central management element (4).

6. Method as claimed in claim 5, wherein the service element (1) also receives traffic profile historical data (73) sent by a user of the network via a central management element (4).

7. Method as claimed in claim 2, wherein a service element (1) of a network requests on-demand resources via a satellite by sending a request (R) to a central management element (4), the central management element allocating bandwidth on a path via a satellite and informing the service element (1) of such allocation (A), whereupon a novel steering plan is generated and data is sent via the satellite path.

8. Method as claimed in claim 7, wherein bandwidth via a satellite is pooled by more than one network and the central management element distributes the available bandwidth over the networks cooperating in the pool.

9. Method as claimed in any of the preceding claims, wherein at least some of the outgoing data flows of a network bypass the steering plan, being sent via monitored paths, wherein, when the monitoring indicates that a path is blocked or its parameters fall below a threshold, the data flows are attracted to a service element (1) and steered via the service element.

10. System for managing network utilisation comprising:

- remote service elements (1) for a number of networks having a number of traffic profile matching rules for data traffic groups (71) and a number of traffic profile delivery rules (72) for data transport for a data traffic group,

the remote service elements (1) being arranged for

- assigning outgoing data traffic from a network to a data traffic group based on the data traffic matching rules (71),

- sending assigned data traffic to and/or from the network via a forwarding device (1a, 3a),

- collecting data traffic information for each said data traffic group,

the remote service element having a data traffic forecaster for providing a data traffic forecast (91) for data transfer for each data traffic group based on collected previous data traffic information for said each data traffic group,

- comparing the actual data traffic to the data traffic forecast,

- monitoring available capacity and quality characteristics for transfer of data via links forming potential data traffic paths,

- making a data steering plan for steering the data traffic groups over the potential data traffic paths, wherein the steering plan is established by comparing and pairing the capacity and quality characteristics of the data traffic paths to the traffic profile delivery rules (72) for a data traffic group,

- steering data traffic for each data traffic group based on the traffic steering plan via paired data traffic paths.

11. System as claimed in claim 10, wherein the remote service elements are arranged for comparing the capabilities of permanent paths to the present data traffic or the data traffic forecast and comprise an on-demand resource requestor for, if the permanent paths do not meet the requirements, requesting a path via a satellite to provide on-demand resources via a satellite link.

12. System as claimed in claim 10 or 11, wherein the system comprises a central service element for assigning incoming traffic to a data traffic group and steering assigned data flows to a network in and via the central service element (3).

13. System as claimed in any of the claims 10 to 12, wherein a service element (1) of a network comprises a receiver for receiving the number of traffic profile matching rules (71) and the number of traffic profile delivery rules (72) sent by a user of the network via a central management element (4).

14. System as claimed in claim 13, wherein the service element (1) also receives traffic profile historical data (73) sent by a user of the network via a central management element (4).

15. System as claimed in claim 11, wherein a service element (1) of a network comprises an on-demand requestor for requesting on-demand resources via a satellite by sending a request (R) to a central management element (4), the central management element comprising means for allocating bandwidth on a path via a satellite, and for informing the service element (1) of such allocation (A).

16. System as claimed in claim 11, wherein bandwidth via a satellite is pooled by more than one network and the central management element comprises means for distributing the available bandwidth over the networks cooperating in the pool.

17. System as claimed in any of the claims 10 to 16, wherein at least some of the outgoing data flows of a network bypass the steering plan, being sent via monitored paths, wherein, when the monitoring indicates that a path is blocked or its parameters fall below a threshold, the service element is arranged for sending out signals for attracting the data flows to the service element (1).

Description:
METHOD AND SYSTEM FOR MANAGING NETWORK UTILISATION

The invention relates to a method for managing network utilisation and to a system for managing network utilisation.

A typical data communication network comprises many components and resources. These may comprise transmission lines, receivers, transmitters, antennas, routers, switches, gateways, satellite links, satellites, submarine cables and the like. Via links between components, data traffic is sent from one component to another. The data may transfer from one subnetwork owned or operated by an entity such as a carrier, via a border router and for instance a satellite link or a submarine or terrestrial cable, to a border router of another subnetwork owned and operated by an entity such as a carrier or enterprise. Both subnetworks may be owned by the same entity, forming an internal network. Within such a subnetwork there may be nodes and links to transfer the data from a source to a border router, from one border router to another border router, or from a border router to a destination. Data is often also transferred from one network to another network via links such as satellite links or submarine and terrestrial cables.

Since the resources and components may require substantial expense to establish and operate, these resources are often used by many users. By sharing, those costs may be reduced. Because the networking resources are shared, use by one user may affect the ability of another user to use those same resources. This may be particularly noticeable during time periods of high network resource utilization. A high utilization of network resources may introduce queuing delays within a network, as data must wait to be transmitted or received until resources become available; if resources are not available in due time, some of the data will be discarded. In some environments, ensuring adequate network capacity to provide a high performance networking environment for all users may be achieved by building in some degree of overcapacity. For example, if a network is generally running at 50% capacity, contention for resources between users is infrequent, as the idle resources provided by the overcapacity can be utilized during periods of peak usage to mitigate any temporary contention that may develop. Such a "brute force" method comes, however, at a price, as the average use of the network as a whole is low compared to the costs. Therefore operating a data network at overcapacity to reduce contention for network resources often is economically inefficient. There is therefore a drive for an efficient sharing of network resources between users.

Several methods are known to manage shared network resources. From WO2014/031679 a method and system is known in which a metric is computed indicative of a subscriber's utilization of shared network resources. A network access control centre controls firewalls. The network access centre aggregates statistics for subscribers and, depending on the aggregated statistics, configures a firewall to reduce the network capabilities available to a subscriber, by limiting or blocking data traffic sent by a subscriber. The network access centre may determine whether a subscriber is exceeding the usage limit specified by a user's network data access plan. If certain criteria are met, the subscriber's network traffic is filtered to allow only data complying with certain capabilities (for instance only certain high importance types of data, while blocking more "frivolous" types of data) without limiting the upload or download speed. If a further criterion is met, the upload and download speed may be restricted in addition.

Although the known method and system provides some advantages, it is still rather limited in flexibility and efficiency.

The present invention aims to increase the efficiency of network utilisation.

Method for managing network utilisation according to the invention comprises:

- establishing data traffic groups for a network, wherein for each data traffic group a number of traffic profile matching rules and a number of traffic profile delivery rules for data transport for a data traffic group are set,

- assigning outgoing and/or incoming data traffic from or to a network to a data traffic group based on the data traffic matching rules,

- sending and/or receiving assigned data traffic to and/or from the network via a forwarding device,

- collecting data traffic information for each said data traffic group,

- providing a data traffic forecast for data transfer for each data traffic group based on collected previous data traffic information for said each data traffic group,

- comparing the actual data traffic to the data traffic forecast,

- monitoring available capacity and quality characteristics for transfer of data via links forming potential data traffic paths,

- making a data steering plan for steering the data traffic groups over the potential data traffic paths, wherein the steering plan is established by comparing and pairing the capacity and quality characteristics of the data traffic paths to the traffic profile delivery rules for a data traffic group,

- steering data traffic for each data traffic group based on the traffic steering plan via paired data traffic paths.

The system for managing network utilisation according to the invention comprises:

- remote service elements for a number of networks having a number of traffic profile matching rules for data traffic groups and a number of traffic profile delivery rules for data transport for a data traffic group,

the remote service elements being arranged for

- assigning outgoing data traffic from a network to a data traffic group based on the data traffic matching rules,

- sending assigned data traffic to and/or from the network via a forwarding device,

- collecting data traffic information for each said data traffic group,

the remote service element having a data traffic forecaster for providing a data traffic forecast for data transfer for each data traffic group based on collected previous data traffic information for said each data traffic group,

- comparing the actual data traffic to the data traffic forecast,

- monitoring available capacity and quality characteristics for transfer of data via links forming potential data traffic paths,

- making a data steering plan for steering the data traffic groups over the potential data traffic paths, wherein the steering plan is established by comparing and pairing the capacity and quality characteristics of the data traffic paths to the traffic profile delivery rules for a data traffic group,

- steering data traffic for each data traffic group based on the traffic steering plan via paired data traffic paths.

The invention relates to a system that works with existing infrastructure to provide traffic management and flexible traffic routing over all the available transmission paths. The most relevant element of this solution is a Traffic Optimization Gateway (TOG), a distributed platform that manages the different network elements in addition to processing specific network data within the TOG to achieve the traffic management and flexible traffic routing. The objective of the TOG platform is to give a single point of management for a distributed service to provide a set of new functionalities:

• Optimizes distribution of traffic flows between various paths such as fibre or submarine cables and satellite

• Reduces impact on end-user experience in case of path outage, such as submarine cable or fibre outages:

o Gives priority to critical traffic on the available network resources, such as, but not limited to satellite links,

o Throttles back non-critical traffic

o Increases bandwidth on the satellite link or other on-demand systems that expose a management interface as an API

o Optimizes the usage of fibre paths' capacity

• Traffic priorities and delivery performance targets are set by customers

The TOG platform is a VAS (Value added service) that will run on top of existing products and technologies integrated to provide an optimized Wide Area Network IP data service. This service will be deployed in a distributed environment, controlling the IP traffic with elements deployed on either or both sides of the Wide Area Network links which are to be managed by the TOG. As the TOG may provide optimized IP data services for multiple Customers, a centralized TOG service node may be paired with a wide number of remote service nodes.

Various aspects of the invention or embodiments of the invention are illustrated in the figures. Fig. 1 illustrates a basic view of the TOG Service method and platform.

A number of subnetworks, in figure 1 denoted with "customer network", are each provided with a remote TOG 1. A central TOG 2 is provided. The central TOG comprises a controller server/service element 3; furthermore the TOG system comprises a manager server/managing element 4 for managing the overall platform.

Traffic from the remote TOGs 1 is sent across the network in accordance with configured rulesets. The service element 3 does identification and steering of data flows and collects measurements on paths through which data flows. The managing element 4 comprises a number of functions such as:

• Satellite resource manager

• O&M

• Centralized management

• Centralized alarm

• Measure and report

• GUI (graphical user interface)

The objective of the TOG is to provide a set of functionalities and features.

The functionality covered by the TOG is Traffic Identification, Traffic classification and marking, optionally Stopping/Throttling of low priority traffic and Traffic Steering.

Additionally, there is an option to add the Satellite backup activation.

The information below presents a high level architecture design of the TOG platform. It provides a helicopter view of the TOG platform and how it will be integrated into the network.

Figure 2 illustrates the general set-up. The system comprises central service elements 3, remote service elements 1 and management elements 4.

The TOG service will preferably run over a distributed platform, an example of which is shown in figure 3, providing a quick view of the TOG solution. On a central site a central TOG 2 is provided; on various remote sites remote TOGs having remote service elements 1 are provided. With each service element 1, 3 a forwarding device 1a, respectively 3a, such as a router or switch, is associated.

The TOG platform preferably comprises:

• A single Management Element (ME) 4, see figure 1, for managing the overall platform on a central TOG

• Several Service Elements (SE), for processing the traffic of the TOG, on the central TOG and remote TOGs.

The different SEs will be similar but not identical due to the nature of the functionality that they will include. Two different kinds of SEs can be distinguished:

• SEs of the Central TOG: The SE will be installed in one or more central locations.

This SE will interface the TOG platform to the Wide Area Network not under control of the TOG, e.g. the internet. Figure 1 illustrates a service element 3 of the central TOG.

• SEs of the Remote TOGs: SEs will be installed in (or near) several key customer premises. This SE will interface the customer IP network.

Additional (intermediate) SEs are not required to process the TOG traffic. Figure 4 further briefly illustrates the architecture, in which the following can be distinguished:

• A Central TOG 2, composed of the ME and the central SE, 4 respectively 3, see figure 1

• Several Remote TOGs comprising SEs 1, installed in the customer premises.

Figure 5 shows some functionalities of the Remote TOGs (left side) and the Central TOGs (right side). The main functionalities of each component are included.

For the outbound data traffic, thus data traffic going from a customer subnetwork via a remote TOG 1 to a central TOG 2, the remote TOG 1 comprises means 41 for identification of data (to which data traffic group data belongs), steering of data (i.e. how to best steer data belonging to a certain data traffic group from the remote TOG 1 over possible links to the central TOG 2) and policy routing and QoS (which conditions determine which data belongs to which data traffic group, and which conditions routing of data traffic groups to the central TOG should meet).

The lower arrows show the primary traffic path mainly followed by the IP traffic in both directions. The upper paths are used by the TOG to provide both additional bandwidth and a backup alternative to the traditional paths, to be used in case of a network outage.

For outbound data traffic the central TOG comprises means for steering the data traffic onwards to the intended final destination.

Whereas, for outbound data traffic the remote TOG comprises means 41, for inbound traffic the central TOG comprises means 42 for identification of data, steering and policy routing and QoS. For inbound traffic the remote TOG comprises means to steer the inbound traffic onwards towards the intended destination within the customer's subnetwork.

Furthermore, in an SE knowledge about the state of the customer network and the view on the customer traffic demand is used to request the most appropriate level of additional on-demand capacity via a satellite connection. To this end service elements, often remote service elements 1, but the same functionality could also be performed by the central service element 2, gather performance data on logical paths created over both on-demand (i.e. satellite) and permanent paths. The logical paths may be constructed using any means that allow the path to be formed between the service element and a traffic delivery endpoint independent of other network flows, in a way that bypasses routing decision making by intermediate network elements.

This may be accomplished by creating logical paths using Virtual Local Area Network technology, IP tunnelling techniques like GRE, or Multi Protocol Label Switching tags.
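Purely as an illustration of such a logical path, the sketch below creates a GRE tunnel on a Linux-based service element with the iproute2 tools; the interface name, endpoint addresses and inner addressing are hypothetical and not taken from the specification, and root privileges are assumed.

    import subprocess

    # Hypothetical tunnel endpoints; a real deployment would take these
    # from the service element's configuration.
    LOCAL, REMOTE = "203.0.113.1", "198.51.100.2"

    def create_gre_path(name: str = "tog-gre0") -> None:
        """Create a GRE logical path that bypasses intermediate routing."""
        for cmd in (
            ["ip", "tunnel", "add", name, "mode", "gre",
             "local", LOCAL, "remote", REMOTE, "ttl", "255"],
            ["ip", "link", "set", name, "up"],
            # Point-to-point addressing inside the tunnel (hypothetical range).
            ["ip", "addr", "add", "10.255.0.1/30", "dev", name],
        ):
            subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        create_gre_path()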

In tunnelling, the data are broken into smaller pieces called packets as they move along the tunnel for transport. As the packets move through the tunnel, they are encrypted and another process called encapsulation occurs. The private network data and the protocol information that goes with it are encapsulated in public network transmission units for sending. The units look like public data, allowing them to be transmitted across the Internet. Encapsulation allows the packets to arrive at their proper destination. At the final destination, de-capsulation and decryption occur.

There are various protocols that allow tunnelling to occur, including:

• Point-to-Point Tunnelling Protocol (PPTP): PPTP keeps proprietary data secure even when it is being communicated over public networks.

Authorized users can access a private network called a virtual private network, which is provided by an Internet service provider. This is a private network in the "virtual" sense because it is actually being created in a tunnelled environment.

• Layer Two Tunnelling Protocol (L2TP): This type of tunnelling protocol involves a combination of using PPTP and Layer 2 Forwarding.

Tunnelling is a way for communication to be conducted over a private network but tunnelled through a public network. This is particularly useful in a corporate setting and also offers security features such as encryption options.

Multiprotocol Label Switching (MPLS) is a type of data-carrying service for high- performance telecommunications networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table. The labels identify virtual links (i.e. paths) between distant nodes rather than endpoints. MPLS can encapsulate packets of various network protocols, hence its name "multiprotocol".

The endpoint of a logical path connected to a Service Element may be another Service Element or any other network device capable of removing the logical path container and forwarding the carried network flows to the correct ultimate destination. Performance data will be gathered from the logical paths using available standards based or proprietary measurement techniques yielding at least information regarding packet loss, jitter and latency, for instance a network performance measurement such as RFC 3432, RFC 3393, RFC 2330 or RFC 2681.

Alternatively or in addition, in the measurement approach, an SE sends one or multiple test packets with a predetermined payload at set intervals to another element suitably configured to process the test packets and to send them back to the originator of the test. By comparing the set of sent test packets with the received packets, the service element is able to determine whether packets went missing, what latency each packet experienced through the network and whether there is a variation in the latencies experienced by each packet (jitter).

The performance measurements are preferably compared to a configurable threshold value. If the performance does not meet the threshold, the path will be declared unavailable. The test may also be used in addition to the telemetry for periodically gauging the accuracy of telemetry measurements.
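A minimal sketch of such an active test, assuming a UDP echo responder that returns each probe unchanged; the port, probe count and threshold values are illustrative assumptions, not values from the specification.

    import socket
    import struct
    import time
    from statistics import mean

    def probe_path(addr: str, port: int = 9, count: int = 20,
                   timeout: float = 0.5):
        """Send numbered, timestamped probes and derive loss, latency, jitter."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        rtts = []
        for seq in range(count):
            sock.sendto(struct.pack("!Id", seq, time.time()), (addr, port))
            try:
                data, _ = sock.recvfrom(64)
                _, t_sent = struct.unpack("!Id", data[:12])
                rtts.append(time.time() - t_sent)
            except socket.timeout:
                pass                            # a lost probe counts as loss
        loss = 1 - len(rtts) / count
        latency = mean(rtts) if rtts else float("inf")
        jitter = (mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))
                  if len(rtts) > 1 else 0.0)
        return loss, latency, jitter

    # Compare against configurable thresholds; otherwise declare the path
    # unavailable (thresholds here are examples only).
    loss, latency, jitter = probe_path("198.51.100.2")
    path_available = loss <= 0.01 and latency <= 0.250 and jitter <= 0.030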

Figure 6 illustrates a part of the system and method of the invention. A user of customer 1 provides the ME 2 of the central TOG with traffic profiles TPC1 for various data traffic groups or informs the ME of its choice of suggested traffic profiles or any combination thereof. This can be done via the internet.

The Management element of the central TOG then provides this traffic profile information to the service elements 2 and 1 of both the central TOG and of customer 1. The service elements comprise a receiver for receiving the traffic profile information. Different sets of traffic profiles TPC1 may be set by a customer for incoming and for outgoing data traffic.

Figure 7 illustrates an example of data comprised in the traffic profiles. They comprise three parts, of which the third part is optional, but highly preferred.

The first part comprises matching rules. These matching rules allow determining to which data traffic group an IP flow is to be assigned. There can be a multitude of data traffic groups. The object is to group IP flows into data traffic groups. A number of matching rules are shown in part 71 of figure 7. The matching rules may comprise one or more of the following:

- Source internet protocol address range

- Destination internet protocol address range

- Internet protocol

- Port

- Type of service bit or diffserv code point
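For illustration, such matching rules can be modelled as a small data structure; a sketch in which the field names and the flow representation are assumptions, with the first matching profile winning and everything else falling through to a default group.

    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network

    @dataclass
    class MatchingRule:
        src_range: str = "0.0.0.0/0"    # source IP address range
        dst_range: str = "0.0.0.0/0"    # destination IP address range
        protocol: int | None = None     # IP protocol number, None = any
        port: int | None = None         # port, None = any
        dscp: int | None = None         # ToS bit / diffserv code point

        def matches(self, flow: dict) -> bool:
            return (ip_address(flow["src"]) in ip_network(self.src_range)
                    and ip_address(flow["dst"]) in ip_network(self.dst_range)
                    and self.protocol in (None, flow["proto"])
                    and self.port in (None, flow["port"])
                    and self.dscp in (None, flow["dscp"]))

    def assign_group(flow: dict, profiles: dict) -> str:
        """Assign an IP flow to the first data traffic group whose rules match."""
        for group, rules in profiles.items():
            if any(rule.matches(flow) for rule in rules):
                return group
        return "default"                # non-matched flow: default treatment

    profiles = {"voice": [MatchingRule(dscp=46)],
                "mail": [MatchingRule(port=25, protocol=6)]}
    flow = {"src": "10.0.0.5", "dst": "192.0.2.9",
            "proto": 6, "port": 25, "dscp": 0}
    assert assign_group(flow, profiles) == "mail"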

Furthermore a set of traffic profile delivery rules 72 is provided for a data traffic group. This specifies the requirements and wishes for delivery of each data traffic group.

The traffic profile delivery rules may comprise one or more of the following:

- Preferred path

- Maximum latency tolerance of matched traffic

- Maximum packet loss of matched traffic

- Maximum jitter tolerance of matched traffic

- Best approximation of tolerance or strict treatment (no matching paths will be calculated if maximal tolerance is not met)
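A sketch of how delivery rules and the strict versus best-approximation choice could drive path selection; the field names, units and numbers are invented for illustration, and measured path characteristics are assumed to come from the monitoring described elsewhere.

    from dataclasses import dataclass

    @dataclass
    class DeliveryRules:
        preferred_paths: list[str] | None = None  # None = any path allowed
        max_latency: float = 0.5                  # seconds
        max_loss: float = 0.05                    # fraction of packets
        max_jitter: float = 0.1                   # seconds
        strict: bool = True                       # False = best approximation

    def select_path(rules: DeliveryRules, paths: dict):
        """paths: {name: (latency, loss, jitter)} measured per logical path."""
        limits = (rules.max_latency, rules.max_loss, rules.max_jitter)
        candidates = {n: m for n, m in paths.items()
                      if rules.preferred_paths is None
                      or n in rules.preferred_paths}
        within = {n: m for n, m in candidates.items()
                  if all(v <= t for v, t in zip(m, limits))}
        if within:                          # tolerances met: lowest latency
            return min(within, key=lambda n: within[n][0])
        if rules.strict or not candidates:
            return None                     # strict: no matching path calculated
        # Best approximation: smallest total relative excess over the limits.
        return min(candidates, key=lambda n: sum(
            max(0.0, v - t) / t for v, t in zip(candidates[n], limits)))

    paths = {"cable-A": (0.040, 0.001, 0.005), "sat-1": (0.550, 0.002, 0.020)}
    print(select_path(DeliveryRules(max_latency=0.1), paths))   # cable-A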

Figure 7 shows the traffic profile for profile 1; in the set of traffic profiles there may be rules for traffic profiles 2, 3 etc.

As an example: there may be data traffic group requirements regarding the speed by which an IP flow is delivered; such requirements would typically result in a threshold value for maximum latency. There may also be requirements on jitter and loss. There may also be, for instance if there is a security requirement, a preferred or required path or paths. There may be for a group a set of requirements, for instance for IP flows for which speed is of some importance, but quality is of the highest importance. For such groups the latency requirement will be set to a moderate value, but the acceptable jitter and loss will be very small. The delivery rules may also specify whether a best approximation is to be chosen or whether the delivery rules are strict. There may be a mix of the two.

For instance: a data traffic group could be: specially encrypted IP flows. The traffic profile matching rules 71 could, as an example, specify that for any IP flow it is checked whether the IP flow is encrypted, and if so, by which encryption method. There would then be one or more data traffic groups within a set of profiles "encrypted". There could be a data traffic group with the profile: encrypted by this program, or by that program; within these two sets of profiles there could be specific profiles, specifying requirements on speed and possible loss. The traffic profile delivery rules may specify that for one group of encrypted IP flows the required paths must be paths A, B and C, and no other paths. The delivery rules for the path would then be restrictive: only specified paths, or paths of a specific type, can be used. For any such encrypted IP flow data traffic group there can also be set the maximum latency, maximum packet loss and/or maximum jitter. For these requirements a best approximation approach may be used.

Therefore, for some of the delivery rules there may be a strict approach, while for other delivery rules or combinations of delivery rules there may be a best approximation approach. Finally, it is preferred that the data for a traffic profile comprises, when being sent to the ME, a third component, namely traffic profile historical data 73 collected previously.

These historical traffic profile data would form the starting point for any planning and steering in the service element 1 or 2. Without starting historical data being supplied, calculation and regulation in the service elements of plans and steering would start from scratch, which could lead to start-up problems due to fluctuating and/or oscillating planning and steering. If such data are not available, such data cannot be sent to the ME. In that case, in embodiments an instruction may be sent to the SE to first start with collecting data and to start planning and steering after some data is collected. Alternatively, the ME may provide some "average starting conditions" for at least some of the historical data. Such data would amount to providing an "educated guess" by the ME to the SE.

In figure 7 a set-up is shown comprising a number of traffic profiles. There may be one simple traffic profile in addition to this one: if an IP flow does not match any of the traffic profiles, then it belongs to the default data traffic group and is treated by a default procedure. This constitutes non-matched flow. Often this may mean that what cannot be classified in any specified data traffic group is "the rest", and such an IP flow is sent, if there is any bandwidth left, by whatever path, no matter how much packet loss or jitter etc.; in short, the last in the line.

In figure 7 a situation is schematically shown in which a set of rules is provided. The provided information may also be, or be related to, an amendment, change or update of an already existing traffic profile TPCi. Examples are (not to be considered restrictive):

- An update of the maximum acceptable loss, latency or jitter.

For a particular traffic profile the maximum loss, latency or jitter may have been set too severe or not severe enough, and in an update this is corrected.

- A change in the source or destination protocol address range: someone within the organisation may have been promoted and henceforth his or her data is upgraded in importance. The same holds for a department that has gained importance. A destination of data may also have gained in importance (e.g. a small customer that has become an important player) or the opposite (e.g. a customer has gone bankrupt). The update in assignment of data sent or received to a more or less "important" data traffic group reflects said change. The other assignment rules or traffic rules may remain unchanged.

The central service element is provided with the information of traffic profiles 1, 2, 3 etc of the various customers.

The service element of customer 1 is only provided with the traffic profiles set by customer 1 as provided to the management element ME and provided to the SE of customer 1.

For security reasons, so as to make sure that the central SE always has the exact same traffic profile information as the SE of the remote TOG, the traffic profile information is passed via the central ME.

In addition to the customer specifying each data profile in detail, i.e. a customized data profile, there can be a number of suggested or proposed standard traffic profiles listed in a list provided at a central point, for instance "high quality profile", "low quality profile", "low loss profile", "encrypted, high quality profile" etc.

The customer chooses one or more of such standard traffic profiles, plus none, one or more personalised traffic profiles, and provides the ME with its choices, upon which the ME provides the information to the remote and central SE. In the SE, information on the settings of one or more standard traffic profiles may already be present. So for instance, if the SE comprises information on a number of proposed standard traffic profiles for standard data traffic groups, the information sent by the ME to the SE may be as simple as: "standard traffic profiles S1, S3 and S7 and, in addition, the following additional traffic profiles characterized by the following rules 71 and 72". The SE then has the information on matching rules, delivery rules and possibly also history for the traffic profiles.

Figure 8 further illustrates an aspect of the invention.

The ME has provided the SE with the traffic profiles. Part A illustrates an IP flow entering the forwarding device 1a, 2a of service element 1, 2. The initial packets of the IP flow, usually the header, are compared to the matching rules 71. This allows establishing to which data traffic group the incoming IP flow is to be assigned. A signal is sent to the application in the SE signalling which rules the IP flow matches and thus to which data traffic group it is assigned. This information is collected. The steering planner is provided with the information, and steers the IP flow to a specific logical port, thereby steering it via the path chosen by the steering planner. Non-matched flows are sent to the default logical port. The subsequent packets are sent to the chosen logical port. Further data is collected and sent to the data collector.

Figure 9 illustrates the process in more detail.

The traffic data collected (to which data traffic group the IP flow is associated, when, how much) is provided to the data collector. These data are stored and form an input for the traffic profile forecaster provider. The traffic profile forecaster provider provides a data traffic forecast (91) as input for the steering planner (92). Furthermore, through telemetry or the process illustrated in figures 10 and 11, measurements on the present parameters of paths, for instance capacity and parameters such as latency, jitter and/or packet loss, are collected and supplied to the collector.

The steering planner is provided with a forecast, for instance from the traffic forecast provider, and a comparison is made, either within or separate from the steering planner, as schematically illustrated in figure 12, between the forecast of data traffic and the actual situation. The steering planner makes a steering plan to match the traffic paths for data traffic groups to the measured parameters, and the steering plan determines the steering actions allowing the forwarding device 1a, 3a to steer the IP flow to the right path. The forecast is made on the basis of collected traffic data. Additional information, such as which days are holidays or the switch from daylight saving time to normal time, may be used in the forecast in addition to previously collected traffic data.

A possible method for collecting path parameters via direct action on available paths is schematically illustrated in figures 10 and 11.

Each Service Element independently uses performance measurements to determine the available paths to reach destinations and provides such information to the steering planner to create a steering plan. Based on this information the steering planner creates a steering plan, which for economic reasons will initially try to meet the demands, i.e. make a steering plan wherein for each or for most data traffic groups the used paths meet or match as well as possible the traffic profile delivery rules 72 by using permanent paths, avoiding expensive satellite capacity.

If a steering planner determines that the available capacities on the permanent path(s) do not suffice, the Service Element will notify the Management Element of the additional satellite bandwidth that is required to accommodate all the traffic in the steering plan, with a defined maximum. The steering plan is based on a forecast of the demand for bandwidth between the time of calculation of the steering plan and a configurable time-period.
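The request sizing described here reduces to a simple shortfall calculation; a sketch under the assumption that forecast demand and permanent-path capacity are expressed in Mbit/s and that the defined maximum is a configuration value.

    def satellite_request(forecast_mbit: float,
                          permanent_capacity_mbit: float,
                          max_request_mbit: float = 100.0) -> float:
        """Bandwidth to request from the Management Element: the forecast
        shortfall on the permanent paths, capped at a defined maximum.
        0.0 means the permanent paths suffice and no request is sent."""
        shortfall = forecast_mbit - permanent_capacity_mbit
        return min(max(shortfall, 0.0), max_request_mbit)

    # e.g. a 250 Mbit/s forecast against 200 Mbit/s of permanent capacity
    assert satellite_request(250.0, 200.0) == 50.0
    assert satellite_request(150.0, 200.0) == 0.0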

Each Service Element collects information regarding the traffic flows passing the Service Element. In addition to collecting information passing the Service Element, a service element may consider other sources of flow information already existing in the network of the customer, which are shared with the Service Element using standard protocols such as Netflow (Cisco proprietary) or IP Flow Information Export (IPFIX, documented in RFC 3917). This information is then processed and summarized in 'Traffic Profiles', either defined by the end-user via an interface on the Management Element, or created internally by the Service Element to keep track of data flows exceeding a certain size threshold.

Based on the collected aggregated data within the Traffic Profiles, each Service Element constructs a historic usage pattern that is in turn used to create a prediction regarding anticipated traffic demand in the coming time-period for data traffic groups. This is performed in a traffic profile forecaster provider (91) as schematically illustrated in figure 9. The actual traffic levels are also measured and compared to the predictions; if there is a significant mismatch, a new steering plan will be created for a shorter time-period.

Figure 12 illustrates this method. In step 121 the traffic forecast provider generates a prediction for each traffic profile each time period T. In figure 12 this is schematically shown by the solid curve. In step 122 the traffic forecaster provider checks the netflow information to compare the current traffic used by each traffic profile to the predicted value. If the current traffic differs more than a standard deviation from the predicted value, in figure 12 schematically illustrated by point 123, the traffic forecast provider notifies the steering planner, indicating that actual traffic deviates from the prediction and preferably by how much. The steering planner then creates a new steering plan, or, if it concerns more than one profile, a new set of steering plans, that will be applied by the traffic steering for a new time period, for instance for the next 24 hours.
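The deviation test of figure 12 might look like the sketch below; computing the standard deviation over recent samples of the profile is an assumption about one reasonable reading of the passage.

    from statistics import pstdev

    def needs_replan(predicted: float, actual: float,
                     history: list[float]) -> bool:
        """True when actual traffic for a profile deviates from the
        prediction by more than one standard deviation (point 123)."""
        sigma = pstdev(history) if len(history) > 1 else 0.0
        return abs(actual - predicted) > sigma

    # Checked each measurement interval for every traffic profile.
    history = [100.0, 110.0, 95.0, 105.0]        # Mbit/s, illustrative
    if needs_replan(predicted=102.0, actual=160.0, history=history):
        print("deviation > 1 sigma: notify the steering planner")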

Alternatively or simultaneously the steering planner may make a new, short term steering plan for a shorter time period, for instance one hour, and replace the one that was created in the long term, 24 hour steering plan by the new shorter term steering plan, reverting back to the long term steering after said one hour.

Most internet applications rely on TCP/IP (Transmission Control Protocol/Internet Protocol). This protocol has built-in mechanisms to ensure that a point to point TCP connection between communicating hosts uses as much bandwidth as possible. Therefore if there is a constraint in the total capacity, the TCP protocol will reduce the amount of bandwidth used in a session, but it will constantly try to revert to the highest possible throughput. The traffic prediction created by the Service Element takes into account how much bandwidth each traffic profile requires under normal circumstances.

By focussing on usage under normal circumstances, the traffic prediction allows a Service Element to estimate how much bandwidth a traffic profile will use when it is no longer constrained due to an event in the network.

Relying on a traffic prediction also allows the Service Element to request the ME to provide a correctly sized on-demand satellite capacity leading to a greater stability of the on-demand path.

Combining the network awareness, i.e. gathered knowledge of available paths and their properties and the business rules, i.e. matching rules and delivery rules, configured by the end-user in the central Management Element with regards to the relative priorities and requirements of configured Traffic Profiles, allows a steering plan to be created. The steering plan thus created represents a solution for the traffic distribution over the available permanent paths and the amount of satellite bandwidth that ideally should be activated on-demand.

An On-Demand Resource requestor (ODRR) in the SE, schematically shown in figure 9, uses the traffic predictions alone or in combination with actual data on properties of available paths to ensure that on-demand capacity, i.e. bandwidth via a satellite link, is requested only when absolutely required.

If the steering plan shows that the capacity of the available permanent paths will not suffice in the near future, the ODRR sends a request for satellite bandwidth to the Management Element. If a permanent path fails so that the remaining paths cannot meet the bandwidth requirements, the ODRR also sends such a request.

The management element comprises an On-demand resource manager (ODRM) which receives this request and similar requests of other remote SEs and, if possible, allocates bandwidth on the satellite paths and informs the SE of such allocation of satellite bandwidth. The information on the allocated satellite bandwidth is provided to the steering planner of the SE, which uses this information to make a new steering plan using the allocated satellite bandwidth for some or all IP flows.

In figure 13, there is a demand of 150 Mbit from left to right, whereas the permanent paths have 200 Mbit of capacity each. Even if either Path A or Path B fails, the other path will have sufficient capacity to meet the demand. The ODRM will only activate the on-demand capacity if both paths would fail or the demand would increase beyond 200 Mbit.
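Under one reading of the figure 13 example, on-demand capacity is activated only when the permanent paths that are still up can no longer carry the demand; the sketch below uses the 150/200 Mbit figures from the passage, and the "beyond 200 Mbit" case then corresponds to the demand exceeding what a single surviving path can carry.

    def on_demand_needed(demand_mbit: float,
                         available_paths: list[float]) -> bool:
        """available_paths: capacities (Mbit/s) of the permanent paths
        currently up; activate satellite capacity only on shortfall."""
        return demand_mbit > sum(available_paths)

    assert not on_demand_needed(150.0, [200.0, 200.0])  # both paths up
    assert not on_demand_needed(150.0, [200.0])         # one path failed
    assert on_demand_needed(150.0, [])                  # both paths failed
    assert on_demand_needed(250.0, [200.0])             # demand beyond 200 Mbit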

Various exemplary elements of the system and method will be discussed below.

Central TOG

The Central TOG comprises:

• A Management Element 4: A dedicated server, e.g. a Linux server, running the distributed platform and service components for management.

• Service Elements: One dedicated Server, e.g. a Linux server, per SE running the distributed platform and service components for IP traffic processing.

• This server will also run dedicated software to take control over the switching elements and apply dynamic control over the IP traffic.

Those components may run over different servers; a virtualization of those components over the same hardware is possible.

For High Availability and scalability purposes, all the elements, Hardware and Software, could be multiplied.

• One or several hardware switching devices: routers or devices supporting forwarding control on flows of packet data, where the matching criteria to determine the action taken on the flow by an external application using an application interface can be for instance based on information present in the headers and the data of OSI model layers 2 through 7, will process the IP traffic of the central TOG.

Figure 14 illustrates the central TOG comprising Service Element and Router/Switch component with application interface, and figure 15 illustrates the management element in the central TOG. The service element of the central TOG comprises a means 42A for identification and classification of data traffic, a means 42B for steering and a means 42C for QoS filtering and shaping. The management element comprises a GUI (graphical user interface). This GUI receives the traffic profiles TPCi of the various customers and pushes this information to the remote SE of the customer.

The Management Element collects the requirements for on-demand capacity derived from the traffic predictions and the steering plan by the Service Elements. In this exemplary figure it is shown that the remote SEs comprise an ODRR which sends its request R to the ODRM in the management element. Based on the configuration of the business rules and the available capacity under management, the Management Element will assign fully, partially or not at all the on-demand satellite capacity requested by a Service Element and assign an allocation A to the customer. This assignment can be realised in the network resources by way of Application Programming Interfaces (API) exposed by 3rd party network management systems. The solution may incorporate additional connectivity to effectively transmit configuration commands from the network management systems to the network elements. Unlike the current state of the art, the Steering Plan preferably shares the capacity based on forecasted traffic demand and in line with defined business rules regarding the allocation of the bandwidth from a common pool, providing the Service Elements on-demand capacity which is guaranteed. The Management Element relays the information about the assigned capacity A to the Service Elements so they can use this information to fine tune their steering plans.
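One hypothetical shape for this pooled assignment is a pro-rata share when the requests exceed the pool; the pool size and the pro-rata rule are assumptions, since the specification only requires that each request be granted fully, partially or not at all in line with the business rules.

    def allocate_pool(pool_mbit: float,
                      requests: dict[str, float]) -> dict[str, float]:
        """Assign pooled satellite bandwidth over requesting Service
        Elements: fully when the pool suffices, otherwise pro rata."""
        total = sum(requests.values())
        if total <= pool_mbit:
            return dict(requests)            # every request fully granted
        scale = pool_mbit / total if total else 0.0
        return {se: amount * scale for se, amount in requests.items()}

    # Two SEs compete for a 100 Mbit/s pool; each allocation A is then
    # relayed back so the SEs can fine tune their steering plans.
    print(allocate_pool(100.0, {"SE-1": 80.0, "SE-2": 40.0}))
    # {'SE-1': 66.66..., 'SE-2': 33.33...}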

In figure 21 below an example of sharing satellite bandwidth is illustrated.

Remote TOG.

Each Remote TOG comprises:

• A generic processing component, e.g. a Linux server, running the distributed platform and all the service components for IP traffic processing and a forwarding component. A reason to multiply these components is to provide High Availability or scalability to the service.

o These servers may also run software provided to take control over the switching elements and apply dynamic control over the IP traffic.

• Virtualization is also supported in the Remote TOGs.

• One or several hardware switching devices: routers or devices supporting forwarding control on flows of packet data, where the matching criteria to determine the action taken on the flow by an external application using an application interface can be based on information present in the headers and the data of OSI model layers 2 through 7, will process the IP traffic of the remote TOG. The invention is not restricted to a specific type of flow.

Figure 16 illustrates the remote TOG comprising Service Element and Router/forwarding component. The remote TOG comprises a means 41A for identification and classification of data traffic, a means 41B for steering and a means 41C for QoS filtering and shaping.

Global Network Diagram

The Network Diagram shown in figure 17 illustrates a global view of the overall network environment. This shows the different elements that are preferably present to provide the TOG service.

As previously explained, there will be at least one Remote TOG in each customer subnetwork. This TOG will be able to process the inbound and outbound traffic of the customer. A defined subset of this traffic will pass through the Central TOG. The paths that can be used for this purpose can be for instance any of the submarine cables accessible by the customer and the recently established satellite paths. There might be more than one satellite path. Supposing an initial path pre-established, the Central TOG could decide to modify it to carry a higher traffic throughput for this customer. Several cable and satellite paths connect the customer premises with the network containing the central TOGs.

The above gives the general outline of the system.

Figure 18 summarizes the set-up:

In this figure 18 the bypass is to the internet. The internet is not the only wide area network the TOG could be applicable to; it might, for instance, also be the EU network of an African operator.

The method of the invention preferably comprises a number of steps:

- establishing data traffic groups, wherein for each traffic group a number of quality parameters for data transport are set for a network,

- assigning outgoing or incoming data traffic from or to a network to a data traffic group,

- sending and/or receiving assigned data traffic to and/or from a network via a forwarding device,

- making a data traffic prediction for data transfer for each data traffic group based on actual and previous data traffic information for said each data traffic group collected from the network,

- monitoring available capacity and quality characteristics for transfer of data via links between networks via the various potential data traffic paths,

- making a data steering plan for steering the data traffic groups over the possible paths, wherein the steering plan is established by comparing and pairing the capacity and quality characteristics of the possible paths to the quality parameters for a data traffic group,

- optionally activating on-demand capacity through application program interfaces exposed by third party resource or network management systems,

- steering data traffic based on the traffic steering plan between subnetworks via shared internetwork links, the shared internetwork links forming various potential data traffic paths between the networks.

For each network with a remote TOG, data traffic groups are established.

For each data traffic group a set of quality and/or business parameters is set.

This is done via the Management Element (ME), which distributes it to the relevant Service Elements; the customer can set e.g. acceptable delay, jitter and packet loss, but also business rules, e.g. desirability based on cost due to usage-based billing methods.

The outgoing and/or ingoing traffic is assigned to a data traffic group; this can be done based for instance on content (e-mail, for instance) or origin within the network (e.g. source address) or destination, or based on a combination of source, destination, protocol and ports, where wildcards are supported to mean 'any value' for a certain parameter.

From and/or to each network via the TOG, a number of data traffic flows are transported. A central element is provided with actual information on the conditions and qualities of the possible paths between the remote TOGs and the central TOG, for instance available satellite connections and submarine cables. Conditions can be available bandwidth, delay, jitter, loss etc. The central element is also provided with present and past data traffic information. Information on conditions of various possible paths may be obtained in various ways, for instance using Netflow, IPFIX or other standard protocols used for "flow accounting"; the data on the conditions of various possible paths is read using known industry protocols such as SNMP (Simple Network Management Protocol) for requesting interface statistics.

A prediction of the data in the data traffic groups going to and from the various subnetworks is made. This prediction and the quality parameters set for the data traffic groups (which may not be the same for each subnetwork) are compared to the conditions for the various available paths. Based on these comparisons and the priority for the data traffic groups, a data steering plan is made and the various data steering groups are steered to and/or from the remote TOGs and central TOG in accordance with the steering plan. This makes dynamic steering of data traffic groups possible. For instance, from one subnetwork a first and a second data traffic group may be sent via satellite, while a third data traffic group is sent via submarine cable. When conditions change, for instance the predicted flow in the first of the data traffic groups changes, the steering plan can be dynamically adjusted, sending the second data traffic group (or part of the second data traffic group) also via a submarine cable, and/or extra bandwidth may be set aside in the satellite link based on the prediction.

The data traffic belonging to a data traffic group can be distinguished by one or more parameters, for instance one or more of the following:

Source address; Destination address; Destination port; ToS value; Source port; Traffic protocol

Depending on the parameters that distinguish a data traffic group, one or more filters are provided. Using the filters, attracted data traffic can be classified in a data traffic group on the basis of a match between parameters for a data traffic group and parameters of the data (precise or to a best approximation). Thus the data is identified as belonging to a data traffic group.

Based on the traffic profiles the traffic passing through the TOG routers is identified.

Data traffic groups are matched to a traffic profile to group them by quality and business parameters and/or priority compared to other data traffic groups.

The TOG supports a number of, for instance up to 20, different traffic profiles per customer. For these Traffic Profiles, the user can define one or more of a number of parameters.

The TOG will apply this configuration for all the flows that are matched as part of this traffic profile. A number of parameters for the traffic profile are:

- Max Packet Loss: The user can define what the maximum value of packet loss that can be tolerated for this traffic profile is.

- Max Latency: The user can define what the maximum value of latency or delay that can be tolerated for a traffic profile is.

- Max Jitter: The user can define what the maximum value of jitter or delay variation that can be tolerated for this traffic profile is.

The user also defines the priority of traffic profiles by the number on the list.

The traffic that cannot be classified is preferably given the default treatment. For instance, in default treatment all the flows are balanced with the same percentage over all the available paths (including the satellite paths). Preferably, for each of the "Max" requirements the user can select, for instance by using a flag, whether, when a requirement cannot be met, the traffic is to be dropped or a path that comes closest to the set maximum value for packet loss, latency or jitter is to be selected.

Once the data traffic flows have been identified, they are classified following the customer rules to handle them. The result of the classification is preferably a list of suitable paths that could be applied to each traffic profile. These lists discard those paths that cannot be used for a given traffic profile. After the classification, based on these lists, the steering functionality, or steering planner, will determine the Steering Plan, specifying which path will be used for each traffic profile for each customer. The classification allows the customer to specify the nature of some relevant traffic and how the TOG should handle it. The customer will configure this traffic classification creating a list of specific Traffic Profiles.
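Putting the pieces together, one hypothetical shape for the resulting plan: profiles are visited in priority order (their position on the customer's list), each is assigned a path from its list of suitable paths produced by the classification, and an empty list yields no path, which corresponds to the strict case.

    def build_steering_plan(profiles: list[tuple[str, int]],
                            suitable_paths: dict[str, list[str]]) -> dict:
        """profiles: (name, priority) pairs, lower number = higher priority.
        suitable_paths: per profile, the paths that survived classification.
        Returns {profile: chosen path or None (strict rules unmet)}."""
        plan = {}
        for name, _priority in sorted(profiles, key=lambda p: p[1]):
            options = suitable_paths.get(name, [])
            plan[name] = options[0] if options else None
        return plan

    profiles = [("voice", 1), ("bulk", 2)]
    suitable = {"voice": ["sat-1", "cable-A"], "bulk": ["cable-A", "cable-B"]}
    print(build_steering_plan(profiles, suitable))
    # {'voice': 'sat-1', 'bulk': 'cable-A'}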

The system preferably comprises a steering planner. The Steering planner is in charge of creating the Steering Plan and determining when this steering plan should be modified.

The TOG will apply a Flexible Traffic Routing to manage the data traffic of the customer. The Steering planner considers 3 items to create the Steering Plans:

• Applying the best path to the new flows at each moment depending on the network status. The classification provides a list of possible paths; the steering planner is provided with up-to-date data on the paths and the predicted data for data traffic groups and combines the various information for an optimal steering plan.

• Optionally controlling the overall status of the network to use the Satellite Backup if necessary.

• Optionally managing the Satellite Subsystem to add Satellite capacity resources if necessary.

The Steering Plan contains all the information required by the TOG Router and the Flow Manager on how to route and mark the traffic.

The Steering planner is responsible for creating new Steering plans when the characteristics of the traffic change, to accommodate the routing to the new traffic arriving at the TOG.

The steering planner receives various kinds of information and makes a steering plan using the received information.

The steering planner receives information from the Monitoring Providers.

The Monitoring Providers continuously check the current values of traffic and the status of the paths. This means that they compare for instance:

- The bandwidth of the current traffic with the predicted values (to detect a prediction failure).

- The IP/SLA values with the previous ones (to detect a change in a path).

In case of a significant change in path conditions, they notify the Steering planner so that the Steering plan can be changed by the steering planner. The Steering planner is also provided with information by a traffic profile forecaster.
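As a hedged sketch of such a comparison (the threshold and the notification interface are assumptions, reusing PathStats from above):

def check_paths(paths: list[PathStats],
                baseline: dict[str, PathStats],
                notify,                      # callback into the steering planner
                rel_change: float = 0.2) -> None:
    """Notify the steering planner when a path metric has changed
    significantly compared to the previously recorded IP/SLA values."""
    for p in paths:
        prev = baseline.get(p.path_id)
        if prev is not None:
            for metric in ("packet_loss", "latency_ms", "jitter_ms"):
                old, new = getattr(prev, metric), getattr(p, metric)
                if old and abs(new - old) / old > rel_change:
                    notify(p.path_id, metric, old, new)  # significant change in a path
        baseline[p.path_id] = p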

The steering plan is based on predictions for the data traffic group flows.

When the Traffic Profile Forecast Provider detects that the current traffic bandwidth of any of the Traffic Profiles (i.e. data traffic groups) differs by more than the standard deviation from the predicted value, it notifies the Steering planner (preferably also reporting how large the deviation is) so that the steering planner may create a new Steering Plan for the changed situation.
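A minimal sketch of this deviation test, assuming the forecast carries a predicted mean and a standard deviation per traffic group (all names hypothetical):

def check_forecast(current_mbps: float, predicted_mbps: float,
                   std_dev_mbps: float, notify) -> None:
    """Notify the steering planner when the actual bandwidth deviates from
    the forecast by more than one standard deviation."""
    deviation = current_mbps - predicted_mbps
    if abs(deviation) > std_dev_mbps:
        notify(deviation)  # preferably also report how large the deviation is

# Example: predicted 100 Mbit/s with a standard deviation of 10 Mbit/s;
# a measured 125 Mbit/s triggers a notification of +25 Mbit/s
check_forecast(125.0, 100.0, 10.0, lambda d: print(f"deviation {d:+.1f} Mbit/s"))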

The steering planner may also be notified of any change in the requirement set for data traffic groups, for instance a change in a maximum allowed jitter etc.

The Steering Plan contains the information required by the TOG Router and the Flow Manager on how to route and mark the traffic for the data traffic groups.

The Steering planner is responsible for creating new Steering plans when the characteristics of the traffic change, to accommodate the routing to the new traffic arriving at the TOG. If the Steering planner has a new Steering Plan, it provides this info to the Policy Manager to reconfigure the TOG router or routers according to this new Steering Plan, so that the data traffic groups are transferred over the links in accordance with the altered steering plan. Whenever necessary or useful, the TOG will preferably attract the data traffic entering or exiting the customer network through defined paths. The methods preferably used in the invention to attract traffic to a remote TOG and/or a central TOG are different and separately usable for the upstream traffic (traffic from the customer to the Internet) and for the downstream traffic (traffic from the Internet to the customer).

Upstream traffic attraction

To steer the traffic from the customer to Internet, a configuration should preferably be made that attracts the traffic to the Remote TOG in place of the PE routers that would receive it in a standard BGP routing environment.

A basic rule of routing is that more specific prefix announcements always take precedence. E.g. a packet to 10.10.10.1 will always follow a route to 10.10.10.0/24 over 10.10.0.0/16, independently of the metric, administrative distance or other parameters of the routes (the only exception is that a route won't be used if the associated next-hop is unreachable).

So, for the Remote TOG to attract the traffic, it should announce the same or more specific prefixes than those received from the Internet. Using the same prefix won't provide a deterministic routing environment; traffic can go to the PE or to the Remote TOG.

To assure that traffic is attracted:

- The Remote TOG will split the prefixes configured in the customer profiles to the minimal amount of more specific prefixes that allow them to be successfully distributed by the steering planner over the available paths and will announce them.

- The Remote TOG will announce the prefixes to all its iBGP peers with a local preference better than the one used in the customer network.

Using this configuration the TOG will attract all configured traffic to be steered.
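As an illustration of the prefix splitting described above (a sketch only; the target prefix length is an assumption and would in practice be chosen by the steering planner), Python's ipaddress module can enumerate the more specific prefixes to announce:

import ipaddress

def split_prefix(prefix: str, new_prefix_len: int) -> list[str]:
    """Split a configured prefix into more specific prefixes so that the
    announcements take precedence over the original route."""
    net = ipaddress.ip_network(prefix)
    return [str(sub) for sub in net.subnets(new_prefix=new_prefix_len)]

# Example: a /22 from a customer profile split into four /24 announcements
print(split_prefix("10.10.8.0/22", 24))
# ['10.10.8.0/24', '10.10.9.0/24', '10.10.10.0/24', '10.10.11.0/24']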

Downstream traffic attraction

Steering the traffic from Internet to the customer has a different set of problems requiring a different solution.

• The Local Preference parameter works only inside the boundaries of an AS according to the BGP specification, so the preference it indicates cannot be relayed to other networks on the "Internet" or other wide area networks composed of several companies with their own AS.

• Traffic flowing from Internet to the customer can flow through the network comprising the Central TOG, but also through any other provider of the customer that will propagate the BGP routing announcements of the customer.

Due to the limited control over the preference associated with the announcements of the customer in other networks on the Internet or other wide area networks composed of several companies with their own AS, the path of the incoming traffic flows depends on events and traffic management decisions of third-party networks outside the control of the network attempting to influence the incoming traffic flows.

To exert control over the path taken by the incoming traffic flows to align them with the steering plan, the invention preferably uses the disaggregation method explained previously, but with differences:

• Customer routes are announced, so the original announcement from the customer is known.

• The prefixes used by the customer and configured for steering will be split by the TOG into several more specific prefixes and/or tagged with BGP communities made available by other networks, and announced. This requires that:

o The customer does not announce any one of the more specific prefixes by itself.

o The customer inserts in the routing database of the associated RIR all of the more specific prefixes that can be steered (not necessarily steered at the present moment, but that can be in the future).

• The Remote TOG will generate these more specific prefixes defined in the policies of the customer and announce them to the Central TOG using eBGP.

• The Central TOG will redistribute those announcements to the BGP speakers in its network.

• The PEs of the network hosting the central TOG will announce these prefixes to the Internet. The BGP announcements will have the AS of the customer as origin and the AS of the network hosting the central TOG as transit. In some deployment scenarios, where the central TOG platform is integrated with the customer network, there will not be a transit AS.

• Furthermore preferably:

o BGP communities made available by other networks are documented in the Central TOG and mapped to generic traffic steering capabilities in the TOG

o The TOG makes a reasonable approximation of paths available to networks which are the source of traffic as defined in the data traffic profile by collecting multiple views of the interconnections between networks from the most interconnected global networks, for instance, the top 10 internet service providers worldwide.

o The TOG engages in a trial and error stage applying the steering techniques on customer BGP announcements with the sole purpose of validating the approximated model by verifying the intended shift of incoming traffic flows towards the intended path. If during the trial and error stage, the intended effect is not observed, the steering plan is improved.

As the prefixes are more specific and/or are only available through the intended paths due to a specific application of BGP communities, the incoming traffic flows will be directed towards the paths determined in the steering plan created by the TOG and thus under control of the TOG.
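Purely illustrative (the patent names no concrete community values; those below are hypothetical), the documented mapping of other networks' BGP communities to generic steering capabilities could be kept as a simple table:

# Hypothetical mapping from a neighbouring network's BGP communities to
# the generic traffic steering capability they provide to the Central TOG
COMMUNITY_CAPABILITIES: dict[str, str] = {
    "64500:1001": "do-not-announce-to-peer-A",    # suppress a path
    "64500:2001": "prepend-once-towards-peer-B",  # make a path less attractive
    "64500:3001": "announce-only-in-region-EU",   # regional scoping
}

def communities_for(capability: str) -> list[str]:
    """Return the communities to tag on an announcement for a capability."""
    return [c for c, cap in COMMUNITY_CAPABILITIES.items() if cap == capability]

print(communities_for("prepend-once-towards-peer-B"))  # ['64500:2001']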

Steering in accordance with the data steering plan can be accomplished in various ways, an example is as follows:

The Traffic Steering summarizes all the activities that the TOG performs on the traffic to achieve routing it over the most desirable path depending on its classification. Depending on the traffic classification - that is, the configuration for each traffic profile - the TOG will determine which of the available paths is the best one to handle each configured traffic profile.

The Result of the Traffic Steering functionality is to create a Steering Plan and to apply it. The Steering Plan contains several steering rules that determine how the traffic is distributed among the different paths.

The TOG will create a Steering Plan, containing all the routing rules to apply to all the Traffic Profiles for a specific period of time. This is basically:

o Selecting the path(s) over which the traffic is sent by setting the Next Hop in a paths list.

o Assigning balancing percentages to each path in the path list.

o Marking the traffic with the appropriate DSCP value on each path.

The Steering planner is in charge of creating the Steering Plan and determining when this steering plan must be modified.
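As a sketch only (the rule format is not specified in the patent; all field names are assumptions), one steering rule of such a plan could be expressed as:

from dataclasses import dataclass, field

@dataclass
class SteeringRule:
    traffic_profile: str
    path_shares: dict[str, float] = field(default_factory=dict)  # path id -> balancing %
    dscp_per_path: dict[str, int] = field(default_factory=dict)  # path id -> DSCP value

# Example: voice traffic split 80/20 over two tunnels, marked EF (DSCP 46)
rule = SteeringRule("voice",
                    path_shares={"tunnel-carrierA": 80.0, "tunnel-carrierB": 20.0},
                    dscp_per_path={"tunnel-carrierA": 46, "tunnel-carrierB": 46})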

The TOG will apply a Flexible Traffic Routing to manage the data traffic flows of the Customer. The Steering planner considers 3 items to create the Steering Plans:

o Applying the best path to the new flows at each moment depending on the network status.

o Optionally controlling the overall status of the network to use the Satellite Backup if necessary.

o Optionally managing the Satellite Subsystem to add Satellite capacity resources if necessary.

This is illustrated in figure 19, which provides an overview of the steering planner.

The Steering Plan contains the information required by the TOG Router and the Flow Manager to route and mark the traffic.

The Steering planner creates new Steering plans when the characteristics of the traffic change, to accommodate the routing to the new traffic arriving at the TOG.

If the Steering planner has a new Steering Plan, it provides this info to the Policy Manager to reconfigure the TOG router according to this new Steering Plan, so that the data traffic flows from a customer are steered according to the new steering plan. The Steering planner preferably creates a new, and sometimes totally different, steering plan each period T. This means approximately one router reconfiguration per period T. Normally, the Traffic Steering functionality will produce a Steering Plan that is applied during a configurable period of time T; it contains the steering rules that the TOG must follow to correctly steer the traffic during this period T. If the new Steering Plan is equal to the current one, no actions are taken.
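A minimal sketch of this periodic replanning, with a hypothetical plan builder and policy-manager callback:

import time

def planning_loop(make_plan, apply_plan, period_t_s: float = 300.0) -> None:
    """Every period T, build a new steering plan; reconfigure the TOG
    router via the policy manager only when the plan actually changed."""
    current = None
    while True:
        new_plan = make_plan()
        if new_plan != current:   # an equal plan requires no action
            apply_plan(new_plan)  # policy manager reconfigures the router(s)
            current = new_plan
        time.sleep(period_t_s)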

To do the traffic steering, tunnels are preferably created between the Central TOG and the Remote TOG and vice versa. An important requirement for the tunnels is that they must follow the exact path for which they were designed. I.e. the tunnel created through carrier A should always transit through carrier A. If there is a connectivity problem with (or inside) carrier A, the tunnel should preferably go down, not be rerouted through other carriers. If the customer has more than one connection with one carrier, preferably a separate tunnel is created for each connection.

The invention can be summarised as follows:

Method and system for managing network utilisation uses data traffic groups. Incoming and outgoing data flows are assigned, based on traffic profile matching rules, to a data traffic group. For a data traffic group a number of quality parameters, such as maximum loss and jitter, are set. A data traffic forecast is made based on collected previous data traffic information. For possible paths for the data traffic groups, such as tunnels, parameters such as available bandwidth and quality parameters are monitored. A traffic data steering plan is made using the gathered information and the data traffic forecast, and the data traffic for each data traffic group is steered on the basis of the data steering plan. In preferred embodiments additional bandwidth via a satellite is requested.

The invention is not restricted to the examples given above; each and every combination of the method steps and the devices shown above embodies the invention, taken alone or in any combination. Any part of the system may be implemented in hardware, software or in a combination of hardware and software. Figure 20 illustrates an example of a slightly different embodiment of the invention. Not all the data flows need to pass via the SEs.

In some circumstances, it may be that a customer wants at least some data flows, whether inbound or outbound, to bypass the SE.

Nevertheless, the customer may want to be sure that if there is a problem with the inbound or outbound data flows that bypass the SE, the system of the invention takes measures to ensure safe and sound delivery of such data flow. The system then monitors the paths, but not the actual data flows through the paths.

Figure 20 illustrates such an embodiment. Some data flows bypass the service element SE. Such flows can be sent via an unmanaged path or via a monitored path. The SE does not control what is sent via the unmanaged path or what is sent via the monitored path. For the monitored path, however, the system does monitor the availability and/or other parameters such as loss, latency and jitter. This can be done, as illustrated, by the SE itself sending test packets via a route including the monitored path, or by monitoring the path via telemetry. If the path fails, or some parameter of the path (packet loss for instance) drops below a threshold chosen by the customer as the boundary between acceptable and not acceptable, the SE sends a BGP (Border Gateway Protocol) announcement to the relevant forwarding devices (routers), thereby attracting the traffic that would be sent through the failed path to the SE. The SE may have permanent access to a satellite link and/or send a request R for satellite bandwidth to the ME. The SE can then send the data, which would otherwise not arrive at its destination, via an alternative path, for instance, as will often be the case and as illustrated in figure 20, via a satellite connection. In the right-hand part of figure 20 this is illustrated by the dotted arrows showing the data traffic being sent over a satellite connection to the intended destination. In the figure, at least some of the outgoing data flows of a network bypass the steering plan and are sent via monitored paths; when the monitoring indicates that a path is blocked or that its parameters fall below a threshold, the data flows are attracted to a service element (1) and steered via the service element (1).
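A hedged sketch of this monitor-and-failover behaviour; the probe and announcement callbacks are hypothetical placeholders, not an interface defined in the patent:

import time

def monitor_path(probe, announce_attract, withdraw_attract,
                 loss_threshold: float, interval_s: float = 10.0) -> None:
    """Probe a monitored path with test packets. On failure or excessive
    packet loss, attract the traffic to the SE by a BGP announcement;
    withdraw it again once the path is acceptable, reverting to the
    situation prior to the failure."""
    attracted = False
    while True:
        loss = probe()  # loss ratio of the test packets, or None if the path is down
        failed = loss is None or loss > loss_threshold
        if failed and not attracted:
            announce_attract()   # BGP announcement pulls the traffic to the SE
            attracted = True
        elif not failed and attracted:
            withdraw_attract()   # resume sending via the monitored path
            attracted = False
        time.sleep(interval_s)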

This embodiment on the one hand provides customers the freedom to bypass the SE for some data flows, thereby reducing costs, and on the other hand the safety and convenience that the present invention provides in case a monitored path fails. For paths that are unmanaged as well as unmonitored, the customer takes the risk of failure of the path. For instance, the customer could arrange for certain data flows from certain forwarding devices to be attracted to the SE but, in normal circumstances, other flows not to be attracted to the SE. However, in case the chosen path for such other flows fails, the customer may have these data flows attracted to the SE. This can be applied both for outbound and for inbound data flows. While the data is sent via the satellite link, the SE preferably keeps on monitoring the failed path. If the SE measures that the monitored failed path is restored, or improved to the extent that it is again acceptable, it sends a message to the relevant forwarding device to stop sending the data to the SE and to resume sending the data via the monitored path, reverting to the situation prior to the failure.

Figure 21 illustrates a sharing of satellite bandwidth. In this example three customers, Operator 1, Operator 2 and Operator 3, share bandwidth. The shared bandwidth pool is 300 Mbit/s. Operators 1 and 3, when there is no failure in the path or paths they use, do not need satellite bandwidth. However, when there is a failure in the permanent paths these customers use, the steering planner in the remote SE will notice that the capacity of the permanent paths is, or soon will be, inadequate and send via the ODRR an on-demand request for satellite bandwidth to the ODRM in the management element. Operators 1 and 3 may pay an insurance fee for the maximal bandwidth. When there is an on-demand request, the central element allocates the bandwidth over the operators taking into account assigned priority and weight.

In this example bandwidth via a satellite is pooled by more than one network and the central management element (ME) distributes the available bandwidth over the networks cooperating in the pool. To this end the central element has gathered data on the agreements between the members of the pool and, when a request for on-demand resources is sent by a service element to the central element, assigns bandwidth on the satellite link in accordance, or in best accordance, with the agreements.
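To illustrate the pooled allocation (the patent gives no algorithm; weighted proportional sharing is one plausible reading, and all numbers and names below are assumptions):

def allocate_pool(pool_mbps: float, requests: dict[str, float],
                  weight: dict[str, float]) -> dict[str, float]:
    """Distribute pooled satellite bandwidth over the requesting operators
    in proportion to their agreed weight, capped at each request."""
    alloc = {op: 0.0 for op in requests}
    pending = dict(requests)
    remaining = pool_mbps
    while pending and remaining > 1e-9:
        total_w = sum(weight[op] for op in pending)
        shares = [(op, remaining * weight[op] / total_w) for op in list(pending)]
        for op, share in shares:
            give = min(share, pending[op])
            alloc[op] += give
            pending[op] -= give
            if pending[op] <= 1e-9:
                del pending[op]
        remaining = pool_mbps - sum(alloc.values())
    return alloc

# Example: a 300 Mbit/s pool; Operators 1 and 3 fail over simultaneously
print(allocate_pool(300.0, {"Op1": 200.0, "Op3": 200.0}, {"Op1": 2.0, "Op3": 1.0}))
# {'Op1': 200.0, 'Op3': 100.0}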

Abbreviations:

PE provider edge

AS Autonomous System

BICS Belgacom International Carrier Services

RIR Regional Internet Registry

BGP Border Gateway Protocol

IP/SLA IP Service Level Agreements

SE service element

ME management element

ODRR On demand resource requestor

ODRM On-Demand Resource Manager

VAS Value added service